00:00:00.000  Started by upstream project "autotest-per-patch" build number 132809
00:00:00.001  originally caused by:
00:00:00.001   Started by user sys_sgci
00:00:00.154  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.155  The recommended git tool is: git
00:00:00.155  using credential 00000000-0000-0000-0000-000000000002
00:00:00.158   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.198  Fetching changes from the remote Git repository
00:00:00.199   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.236  Using shallow fetch with depth 1
00:00:00.236  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.236   > git --version # timeout=10
00:00:00.268   > git --version # 'git version 2.39.2'
00:00:00.268  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.284  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.284   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.704   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.716   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.730  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.730   > git config core.sparsecheckout # timeout=10
00:00:07.742   > git read-tree -mu HEAD # timeout=10
00:00:07.757   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.787  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.787   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.920  [Pipeline] Start of Pipeline
00:00:07.929  [Pipeline] library
00:00:07.931  Loading library shm_lib@master
00:00:07.931  Library shm_lib@master is cached. Copying from home.
00:00:07.946  [Pipeline] node
00:00:07.957  Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.959  [Pipeline] {
00:00:07.969  [Pipeline] catchError
00:00:07.970  [Pipeline] {
00:00:07.980  [Pipeline] wrap
00:00:07.987  [Pipeline] {
00:00:07.998  [Pipeline] stage
00:00:08.001  [Pipeline] { (Prologue)
00:00:08.246  [Pipeline] sh
00:00:08.536  + logger -p user.info -t JENKINS-CI
00:00:08.572  [Pipeline] echo
00:00:08.579  Node: WFP4
00:00:08.594  [Pipeline] sh
00:00:08.890  [Pipeline] setCustomBuildProperty
00:00:08.903  [Pipeline] echo
00:00:08.904  Cleanup processes
00:00:08.910  [Pipeline] sh
00:00:09.193  + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.193  2815380 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.206  [Pipeline] sh
00:00:09.489  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.489  ++ grep -v 'sudo pgrep'
00:00:09.489  ++ awk '{print $1}'
00:00:09.489  + sudo kill -9
00:00:09.489  + true
00:00:09.499  [Pipeline] cleanWs
00:00:09.507  [WS-CLEANUP] Deleting project workspace...
00:00:09.507  [WS-CLEANUP] Deferred wipeout is used...
00:00:09.512  [WS-CLEANUP] done
00:00:09.515  [Pipeline] setCustomBuildProperty
00:00:09.524  [Pipeline] sh
00:00:09.802  + sudo git config --global --replace-all safe.directory '*'
00:00:09.920  [Pipeline] httpRequest
00:00:10.632  [Pipeline] echo
00:00:10.633  Sorcerer 10.211.164.112 is alive
00:00:10.641  [Pipeline] retry
00:00:10.644  [Pipeline] {
00:00:10.656  [Pipeline] httpRequest
00:00:10.659  HttpMethod: GET
00:00:10.659  URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.660  Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.675  Response Code: HTTP/1.1 200 OK
00:00:10.676  Success: Status code 200 is in the accepted range: 200,404
00:00:10.676  Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.172  [Pipeline] }
00:00:20.191  [Pipeline] // retry
00:00:20.199  [Pipeline] sh
00:00:20.485  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:20.501  [Pipeline] httpRequest
00:00:20.884  [Pipeline] echo
00:00:20.886  Sorcerer 10.211.164.112 is alive
00:00:20.895  [Pipeline] retry
00:00:20.897  [Pipeline] {
00:00:20.911  [Pipeline] httpRequest
00:00:20.914  HttpMethod: GET
00:00:20.915  URL: http://10.211.164.112/packages/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz
00:00:20.915  Sending request to url: http://10.211.164.112/packages/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz
00:00:20.937  Response Code: HTTP/1.1 200 OK
00:00:20.937  Success: Status code 200 is in the accepted range: 200,404
00:00:20.938  Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz
00:06:08.388  [Pipeline] }
00:06:08.405  [Pipeline] // retry
00:06:08.411  [Pipeline] sh
00:06:08.694  + tar --no-same-owner -xf spdk_06358c25081129256abcc28a5821dd2ecca7e06d.tar.gz
00:06:11.239  [Pipeline] sh
00:06:11.522  + git -C spdk log --oneline -n5
00:06:11.522  06358c250 bdev/nvme: use poll_group's fd_group to register interrupts
00:06:11.522  1ae735a5d nvme: add poll_group interrupt callback
00:06:11.522  f80471632 nvme: add spdk_nvme_poll_group_get_fd_group()
00:06:11.522  969b360d9 thread: fd_group-based interrupts
00:06:11.522  851f166ec thread: move interrupt allocation to a function
00:06:11.532  [Pipeline] }
00:06:11.545  [Pipeline] // stage
00:06:11.553  [Pipeline] stage
00:06:11.555  [Pipeline] { (Prepare)
00:06:11.569  [Pipeline] writeFile
00:06:11.583  [Pipeline] sh
00:06:11.864  + logger -p user.info -t JENKINS-CI
00:06:11.876  [Pipeline] sh
00:06:12.158  + logger -p user.info -t JENKINS-CI
00:06:12.169  [Pipeline] sh
00:06:12.451  + cat autorun-spdk.conf
00:06:12.451  SPDK_RUN_FUNCTIONAL_TEST=1
00:06:12.451  SPDK_TEST_NVMF=1
00:06:12.451  SPDK_TEST_NVME_CLI=1
00:06:12.451  SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:12.451  SPDK_TEST_NVMF_NICS=e810
00:06:12.451  SPDK_TEST_VFIOUSER=1
00:06:12.451  SPDK_RUN_UBSAN=1
00:06:12.451  NET_TYPE=phy
00:06:12.458  RUN_NIGHTLY=0
00:06:12.463  [Pipeline] readFile
00:06:12.484  [Pipeline] withEnv
00:06:12.486  [Pipeline] {
00:06:12.497  [Pipeline] sh
00:06:12.780  + set -ex
00:06:12.780  + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:06:12.780  + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:12.780  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:12.780  ++ SPDK_TEST_NVMF=1
00:06:12.780  ++ SPDK_TEST_NVME_CLI=1
00:06:12.780  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:12.780  ++ SPDK_TEST_NVMF_NICS=e810
00:06:12.780  ++ SPDK_TEST_VFIOUSER=1
00:06:12.780  ++ SPDK_RUN_UBSAN=1
00:06:12.780  ++ NET_TYPE=phy
00:06:12.780  ++ RUN_NIGHTLY=0
00:06:12.780  + case $SPDK_TEST_NVMF_NICS in
00:06:12.780  + DRIVERS=ice
00:06:12.780  + [[ tcp == \r\d\m\a ]]
00:06:12.780  + [[ -n ice ]]
00:06:12.780  + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:06:12.780  rmmod: ERROR: Module mlx4_ib is not currently loaded
00:06:12.780  rmmod: ERROR: Module mlx5_ib is not currently loaded
00:06:12.780  rmmod: ERROR: Module i40iw is not currently loaded
00:06:12.780  rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:06:12.780  + true
00:06:12.780  + for D in $DRIVERS
00:06:12.780  + sudo modprobe ice
00:06:12.780  + exit 0
00:06:12.789  [Pipeline] }
00:06:12.802  [Pipeline] // withEnv
00:06:12.806  [Pipeline] }
00:06:12.818  [Pipeline] // stage
00:06:12.826  [Pipeline] catchError
00:06:12.828  [Pipeline] {
00:06:12.839  [Pipeline] timeout
00:06:12.839  Timeout set to expire in 1 hr 0 min
00:06:12.841  [Pipeline] {
00:06:12.853  [Pipeline] stage
00:06:12.854  [Pipeline] { (Tests)
00:06:12.866  [Pipeline] sh
00:06:13.150  + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:13.150  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:13.150  + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:13.150  + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:06:13.150  + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:13.150  + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:06:13.150  + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:06:13.150  + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:06:13.150  + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:06:13.150  + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:06:13.150  + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:06:13.150  + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:06:13.150  + source /etc/os-release
00:06:13.150  ++ NAME='Fedora Linux'
00:06:13.150  ++ VERSION='39 (Cloud Edition)'
00:06:13.150  ++ ID=fedora
00:06:13.150  ++ VERSION_ID=39
00:06:13.150  ++ VERSION_CODENAME=
00:06:13.150  ++ PLATFORM_ID=platform:f39
00:06:13.150  ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:06:13.150  ++ ANSI_COLOR='0;38;2;60;110;180'
00:06:13.150  ++ LOGO=fedora-logo-icon
00:06:13.150  ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:06:13.150  ++ HOME_URL=https://fedoraproject.org/
00:06:13.150  ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:06:13.150  ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:06:13.150  ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:06:13.150  ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:06:13.150  ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:06:13.150  ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:06:13.150  ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:06:13.150  ++ SUPPORT_END=2024-11-12
00:06:13.150  ++ VARIANT='Cloud Edition'
00:06:13.150  ++ VARIANT_ID=cloud
00:06:13.150  + uname -a
00:06:13.150  Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:06:13.150  + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:06:15.686  Hugepages
00:06:15.686  node     hugesize     free /  total
00:06:15.686  node0   1048576kB        0 /      0
00:06:15.686  node0      2048kB        0 /      0
00:06:15.686  node1   1048576kB        0 /      0
00:06:15.686  node1      2048kB        0 /      0
00:06:15.686  
00:06:15.686  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:15.686  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:06:15.686  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:06:15.686  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:06:15.686  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:06:15.686  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:06:15.686  + rm -f /tmp/spdk-ld-path
00:06:15.686  + source autorun-spdk.conf
00:06:15.686  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:15.686  ++ SPDK_TEST_NVMF=1
00:06:15.686  ++ SPDK_TEST_NVME_CLI=1
00:06:15.686  ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:15.686  ++ SPDK_TEST_NVMF_NICS=e810
00:06:15.686  ++ SPDK_TEST_VFIOUSER=1
00:06:15.686  ++ SPDK_RUN_UBSAN=1
00:06:15.686  ++ NET_TYPE=phy
00:06:15.686  ++ RUN_NIGHTLY=0
00:06:15.686  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:06:15.686  + [[ -n '' ]]
00:06:15.686  + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:15.686  + for M in /var/spdk/build-*-manifest.txt
00:06:15.686  + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:06:15.686  + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:15.686  + for M in /var/spdk/build-*-manifest.txt
00:06:15.686  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:06:15.686  + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:15.686  + for M in /var/spdk/build-*-manifest.txt
00:06:15.686  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:06:15.686  + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:06:15.686  ++ uname
00:06:15.686  + [[ Linux == \L\i\n\u\x ]]
00:06:15.686  + sudo dmesg -T
00:06:15.946  + sudo dmesg --clear
00:06:15.946  + dmesg_pid=2817375
00:06:15.946  + [[ Fedora Linux == FreeBSD ]]
00:06:15.946  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:15.946  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:15.946  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:06:15.946  + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:06:15.946  + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:06:15.946  + [[ -x /usr/src/fio-static/fio ]]
00:06:15.946  + sudo dmesg -Tw
00:06:15.946  + export FIO_BIN=/usr/src/fio-static/fio
00:06:15.946  + FIO_BIN=/usr/src/fio-static/fio
00:06:15.946  + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:06:15.946  + [[ ! -v VFIO_QEMU_BIN ]]
00:06:15.946  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:06:15.946  + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:15.946  + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:06:15.946  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:06:15.946  + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:15.946  + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:06:15.946  + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:15.946    23:47:31  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:15.946   23:47:31  -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:06:15.946    23:47:31  -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:06:15.946   23:47:31  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:06:15.946   23:47:31  -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:06:15.946     23:47:31  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:06:15.946    23:47:31  -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:15.946     23:47:31  -- scripts/common.sh@15 -- $ shopt -s extglob
00:06:15.946     23:47:31  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:06:15.946     23:47:31  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:15.946     23:47:31  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:15.946      23:47:31  -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:15.946      23:47:31  -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:15.946      23:47:31  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:15.946      23:47:31  -- paths/export.sh@5 -- $ export PATH
00:06:15.946      23:47:31  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:15.946    23:47:31  -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:06:15.946      23:47:31  -- common/autobuild_common.sh@493 -- $ date +%s
00:06:15.946     23:47:31  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733784451.XXXXXX
00:06:15.946    23:47:31  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733784451.wNV1j7
00:06:15.946    23:47:31  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:06:15.946    23:47:31  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:06:15.946    23:47:31  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:06:15.946    23:47:31  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:06:15.946    23:47:31  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:06:15.946     23:47:31  -- common/autobuild_common.sh@509 -- $ get_config_params
00:06:15.946     23:47:31  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:06:15.946     23:47:31  -- common/autotest_common.sh@10 -- $ set +x
00:06:15.946    23:47:31  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:06:15.946    23:47:31  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:06:15.946    23:47:31  -- pm/common@17 -- $ local monitor
00:06:15.946    23:47:31  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:15.946    23:47:31  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:15.946    23:47:31  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:15.946     23:47:31  -- pm/common@21 -- $ date +%s
00:06:15.946    23:47:31  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:06:15.946     23:47:31  -- pm/common@21 -- $ date +%s
00:06:15.946    23:47:31  -- pm/common@25 -- $ sleep 1
00:06:15.946     23:47:31  -- pm/common@21 -- $ date +%s
00:06:15.946     23:47:31  -- pm/common@21 -- $ date +%s
00:06:15.946    23:47:31  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784451
00:06:15.946    23:47:31  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784451
00:06:15.946    23:47:31  -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784451
00:06:15.946    23:47:31  -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733784451
00:06:16.205  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784451_collect-vmstat.pm.log
00:06:16.205  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784451_collect-cpu-load.pm.log
00:06:16.205  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784451_collect-cpu-temp.pm.log
00:06:16.205  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733784451_collect-bmc-pm.bmc.pm.log
00:06:17.142    23:47:32  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:06:17.142   23:47:32  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:06:17.142   23:47:32  -- spdk/autobuild.sh@12 -- $ umask 022
00:06:17.142   23:47:32  -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:06:17.142   23:47:32  -- spdk/autobuild.sh@16 -- $ date -u
00:06:17.142  Mon Dec  9 10:47:32 PM UTC 2024
00:06:17.142   23:47:32  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:06:17.142  v25.01-pre-321-g06358c250
00:06:17.142   23:47:32  -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:06:17.142   23:47:32  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:06:17.142   23:47:32  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:06:17.142   23:47:32  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:17.142   23:47:32  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:17.142   23:47:32  -- common/autotest_common.sh@10 -- $ set +x
00:06:17.142  ************************************
00:06:17.142  START TEST ubsan
00:06:17.142  ************************************
00:06:17.142   23:47:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:06:17.142  using ubsan
00:06:17.142  
00:06:17.142  real	0m0.000s
00:06:17.142  user	0m0.000s
00:06:17.142  sys	0m0.000s
00:06:17.142   23:47:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:06:17.142   23:47:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:06:17.142  ************************************
00:06:17.142  END TEST ubsan
00:06:17.142  ************************************
00:06:17.142   23:47:32  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:06:17.142   23:47:32  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:06:17.142   23:47:32  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:06:17.142   23:47:32  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:06:17.142   23:47:32  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:06:17.142   23:47:32  -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:06:17.142   23:47:32  -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:06:17.142   23:47:32  -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:06:17.143   23:47:32  -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:06:17.401  Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:06:17.401  Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:17.660  Using 'verbs' RDMA provider
00:06:30.805  Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:06:43.160  Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:06:43.160  Creating mk/config.mk...done.
00:06:43.160  Creating mk/cc.flags.mk...done.
00:06:43.160  Type 'make' to build.
00:06:43.160   23:47:58  -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:06:43.160   23:47:58  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:06:43.160   23:47:58  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:06:43.160   23:47:58  -- common/autotest_common.sh@10 -- $ set +x
00:06:43.160  ************************************
00:06:43.160  START TEST make
00:06:43.160  ************************************
00:06:43.160   23:47:58 make -- common/autotest_common.sh@1129 -- $ make -j96
00:06:43.160  make[1]: Nothing to be done for 'all'.
00:06:44.554  The Meson build system
00:06:44.554  Version: 1.5.0
00:06:44.554  Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:06:44.555  Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:44.555  Build type: native build
00:06:44.555  Project name: libvfio-user
00:06:44.555  Project version: 0.0.1
00:06:44.555  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:44.555  C linker for the host machine: cc ld.bfd 2.40-14
00:06:44.555  Host machine cpu family: x86_64
00:06:44.555  Host machine cpu: x86_64
00:06:44.555  Run-time dependency threads found: YES
00:06:44.555  Library dl found: YES
00:06:44.555  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:44.555  Run-time dependency json-c found: YES 0.17
00:06:44.555  Run-time dependency cmocka found: YES 1.1.7
00:06:44.555  Program pytest-3 found: NO
00:06:44.555  Program flake8 found: NO
00:06:44.555  Program misspell-fixer found: NO
00:06:44.555  Program restructuredtext-lint found: NO
00:06:44.555  Program valgrind found: YES (/usr/bin/valgrind)
00:06:44.555  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:06:44.555  Compiler for C supports arguments -Wmissing-declarations: YES 
00:06:44.555  Compiler for C supports arguments -Wwrite-strings: YES 
00:06:44.555  ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:44.555  Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:06:44.555  Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:06:44.555  ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:06:44.555  Build targets in project: 8
00:06:44.555  WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:06:44.555   * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:06:44.555  
00:06:44.555  libvfio-user 0.0.1
00:06:44.555  
00:06:44.555    User defined options
00:06:44.555      buildtype      : debug
00:06:44.555      default_library: shared
00:06:44.555      libdir         : /usr/local/lib
00:06:44.555  
00:06:44.555  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:45.487  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:45.487  [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:06:45.487  [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:06:45.487  [3/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:06:45.487  [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:06:45.487  [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:06:45.487  [6/37] Compiling C object samples/null.p/null.c.o
00:06:45.487  [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:06:45.487  [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:06:45.487  [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:06:45.487  [10/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:06:45.487  [11/37] Compiling C object test/unit_tests.p/mocks.c.o
00:06:45.487  [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:06:45.487  [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:06:45.487  [14/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:06:45.487  [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:06:45.487  [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:06:45.487  [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:06:45.487  [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:06:45.487  [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:06:45.487  [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:06:45.487  [21/37] Compiling C object samples/server.p/server.c.o
00:06:45.487  [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:06:45.487  [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:06:45.487  [24/37] Compiling C object samples/client.p/client.c.o
00:06:45.487  [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:06:45.487  [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:06:45.487  [27/37] Linking target samples/client
00:06:45.487  [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:06:45.487  [29/37] Linking target test/unit_tests
00:06:45.487  [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:06:45.744  [31/37] Linking target lib/libvfio-user.so.0.0.1
00:06:45.744  [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:06:45.744  [33/37] Linking target samples/gpio-pci-idio-16
00:06:45.744  [34/37] Linking target samples/server
00:06:45.744  [35/37] Linking target samples/null
00:06:45.744  [36/37] Linking target samples/lspci
00:06:45.744  [37/37] Linking target samples/shadow_ioeventfd_server
00:06:45.744  INFO: autodetecting backend as ninja
00:06:45.744  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:45.744  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:06:46.311  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:06:46.311  ninja: no work to do.
00:06:51.586  The Meson build system
00:06:51.586  Version: 1.5.0
00:06:51.586  Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:06:51.586  Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:06:51.586  Build type: native build
00:06:51.586  Program cat found: YES (/usr/bin/cat)
00:06:51.586  Project name: DPDK
00:06:51.586  Project version: 24.03.0
00:06:51.586  C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:06:51.586  C linker for the host machine: cc ld.bfd 2.40-14
00:06:51.586  Host machine cpu family: x86_64
00:06:51.586  Host machine cpu: x86_64
00:06:51.586  Message: ## Building in Developer Mode ##
00:06:51.586  Program pkg-config found: YES (/usr/bin/pkg-config)
00:06:51.586  Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:06:51.586  Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:06:51.586  Program python3 found: YES (/usr/bin/python3)
00:06:51.586  Program cat found: YES (/usr/bin/cat)
00:06:51.586  Compiler for C supports arguments -march=native: YES 
00:06:51.586  Checking for size of "void *" : 8 
00:06:51.586  Checking for size of "void *" : 8 (cached)
00:06:51.586  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:06:51.586  Library m found: YES
00:06:51.586  Library numa found: YES
00:06:51.586  Has header "numaif.h" : YES 
00:06:51.586  Library fdt found: NO
00:06:51.586  Library execinfo found: NO
00:06:51.586  Has header "execinfo.h" : YES 
00:06:51.586  Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:06:51.586  Run-time dependency libarchive found: NO (tried pkgconfig)
00:06:51.586  Run-time dependency libbsd found: NO (tried pkgconfig)
00:06:51.586  Run-time dependency jansson found: NO (tried pkgconfig)
00:06:51.586  Run-time dependency openssl found: YES 3.1.1
00:06:51.586  Run-time dependency libpcap found: YES 1.10.4
00:06:51.586  Has header "pcap.h" with dependency libpcap: YES 
00:06:51.586  Compiler for C supports arguments -Wcast-qual: YES 
00:06:51.586  Compiler for C supports arguments -Wdeprecated: YES 
00:06:51.586  Compiler for C supports arguments -Wformat: YES 
00:06:51.586  Compiler for C supports arguments -Wformat-nonliteral: NO 
00:06:51.586  Compiler for C supports arguments -Wformat-security: NO 
00:06:51.586  Compiler for C supports arguments -Wmissing-declarations: YES 
00:06:51.586  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:06:51.586  Compiler for C supports arguments -Wnested-externs: YES 
00:06:51.586  Compiler for C supports arguments -Wold-style-definition: YES 
00:06:51.586  Compiler for C supports arguments -Wpointer-arith: YES 
00:06:51.586  Compiler for C supports arguments -Wsign-compare: YES 
00:06:51.586  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:06:51.586  Compiler for C supports arguments -Wundef: YES 
00:06:51.586  Compiler for C supports arguments -Wwrite-strings: YES 
00:06:51.586  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:06:51.586  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:06:51.586  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:06:51.586  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:06:51.586  Program objdump found: YES (/usr/bin/objdump)
00:06:51.586  Compiler for C supports arguments -mavx512f: YES 
00:06:51.586  Checking if "AVX512 checking" compiles: YES 
00:06:51.586  Fetching value of define "__SSE4_2__" : 1 
00:06:51.586  Fetching value of define "__AES__" : 1 
00:06:51.586  Fetching value of define "__AVX__" : 1 
00:06:51.586  Fetching value of define "__AVX2__" : 1 
00:06:51.586  Fetching value of define "__AVX512BW__" : 1 
00:06:51.586  Fetching value of define "__AVX512CD__" : 1 
00:06:51.586  Fetching value of define "__AVX512DQ__" : 1 
00:06:51.586  Fetching value of define "__AVX512F__" : 1 
00:06:51.586  Fetching value of define "__AVX512VL__" : 1 
00:06:51.586  Fetching value of define "__PCLMUL__" : 1 
00:06:51.586  Fetching value of define "__RDRND__" : 1 
00:06:51.586  Fetching value of define "__RDSEED__" : 1 
00:06:51.586  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:06:51.586  Fetching value of define "__znver1__" : (undefined) 
00:06:51.586  Fetching value of define "__znver2__" : (undefined) 
00:06:51.586  Fetching value of define "__znver3__" : (undefined) 
00:06:51.586  Fetching value of define "__znver4__" : (undefined) 
00:06:51.586  Compiler for C supports arguments -Wno-format-truncation: YES 
00:06:51.586  Message: lib/log: Defining dependency "log"
00:06:51.586  Message: lib/kvargs: Defining dependency "kvargs"
00:06:51.586  Message: lib/telemetry: Defining dependency "telemetry"
00:06:51.586  Checking for function "getentropy" : NO 
00:06:51.586  Message: lib/eal: Defining dependency "eal"
00:06:51.586  Message: lib/ring: Defining dependency "ring"
00:06:51.586  Message: lib/rcu: Defining dependency "rcu"
00:06:51.586  Message: lib/mempool: Defining dependency "mempool"
00:06:51.586  Message: lib/mbuf: Defining dependency "mbuf"
00:06:51.586  Fetching value of define "__PCLMUL__" : 1 (cached)
00:06:51.586  Fetching value of define "__AVX512F__" : 1 (cached)
00:06:51.586  Fetching value of define "__AVX512BW__" : 1 (cached)
00:06:51.586  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:06:51.586  Fetching value of define "__AVX512VL__" : 1 (cached)
00:06:51.586  Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:06:51.586  Compiler for C supports arguments -mpclmul: YES 
00:06:51.586  Compiler for C supports arguments -maes: YES 
00:06:51.586  Compiler for C supports arguments -mavx512f: YES (cached)
00:06:51.586  Compiler for C supports arguments -mavx512bw: YES 
00:06:51.586  Compiler for C supports arguments -mavx512dq: YES 
00:06:51.586  Compiler for C supports arguments -mavx512vl: YES 
00:06:51.586  Compiler for C supports arguments -mvpclmulqdq: YES 
00:06:51.586  Compiler for C supports arguments -mavx2: YES 
00:06:51.586  Compiler for C supports arguments -mavx: YES 
00:06:51.586  Message: lib/net: Defining dependency "net"
00:06:51.586  Message: lib/meter: Defining dependency "meter"
00:06:51.586  Message: lib/ethdev: Defining dependency "ethdev"
00:06:51.586  Message: lib/pci: Defining dependency "pci"
00:06:51.586  Message: lib/cmdline: Defining dependency "cmdline"
00:06:51.586  Message: lib/hash: Defining dependency "hash"
00:06:51.586  Message: lib/timer: Defining dependency "timer"
00:06:51.586  Message: lib/compressdev: Defining dependency "compressdev"
00:06:51.586  Message: lib/cryptodev: Defining dependency "cryptodev"
00:06:51.586  Message: lib/dmadev: Defining dependency "dmadev"
00:06:51.586  Compiler for C supports arguments -Wno-cast-qual: YES 
00:06:51.586  Message: lib/power: Defining dependency "power"
00:06:51.586  Message: lib/reorder: Defining dependency "reorder"
00:06:51.586  Message: lib/security: Defining dependency "security"
00:06:51.586  Has header "linux/userfaultfd.h" : YES 
00:06:51.586  Has header "linux/vduse.h" : YES 
00:06:51.586  Message: lib/vhost: Defining dependency "vhost"
00:06:51.586  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:06:51.586  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:06:51.586  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:06:51.586  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:06:51.586  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:06:51.586  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:06:51.586  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:06:51.586  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:06:51.586  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:06:51.586  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:06:51.586  Program doxygen found: YES (/usr/local/bin/doxygen)
00:06:51.586  Configuring doxy-api-html.conf using configuration
00:06:51.586  Configuring doxy-api-man.conf using configuration
00:06:51.586  Program mandb found: YES (/usr/bin/mandb)
00:06:51.586  Program sphinx-build found: NO
00:06:51.586  Configuring rte_build_config.h using configuration
00:06:51.586  Message: 
00:06:51.586  =================
00:06:51.586  Applications Enabled
00:06:51.586  =================
00:06:51.586  
00:06:51.586  apps:
00:06:51.586  	
00:06:51.586  
00:06:51.586  Message: 
00:06:51.586  =================
00:06:51.586  Libraries Enabled
00:06:51.586  =================
00:06:51.586  
00:06:51.586  libs:
00:06:51.586  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:06:51.586  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:06:51.586  	cryptodev, dmadev, power, reorder, security, vhost, 
00:06:51.586  
00:06:51.586  Message: 
00:06:51.586  ===============
00:06:51.586  Drivers Enabled
00:06:51.586  ===============
00:06:51.586  
00:06:51.586  common:
00:06:51.586  	
00:06:51.586  bus:
00:06:51.586  	pci, vdev, 
00:06:51.586  mempool:
00:06:51.586  	ring, 
00:06:51.586  dma:
00:06:51.586  	
00:06:51.586  net:
00:06:51.586  	
00:06:51.586  crypto:
00:06:51.586  	
00:06:51.586  compress:
00:06:51.586  	
00:06:51.586  vdpa:
00:06:51.586  	
00:06:51.586  
00:06:51.586  Message: 
00:06:51.586  =================
00:06:51.586  Content Skipped
00:06:51.586  =================
00:06:51.586  
00:06:51.586  apps:
00:06:51.586  	dumpcap:	explicitly disabled via build config
00:06:51.586  	graph:	explicitly disabled via build config
00:06:51.586  	pdump:	explicitly disabled via build config
00:06:51.586  	proc-info:	explicitly disabled via build config
00:06:51.586  	test-acl:	explicitly disabled via build config
00:06:51.586  	test-bbdev:	explicitly disabled via build config
00:06:51.586  	test-cmdline:	explicitly disabled via build config
00:06:51.586  	test-compress-perf:	explicitly disabled via build config
00:06:51.586  	test-crypto-perf:	explicitly disabled via build config
00:06:51.586  	test-dma-perf:	explicitly disabled via build config
00:06:51.586  	test-eventdev:	explicitly disabled via build config
00:06:51.586  	test-fib:	explicitly disabled via build config
00:06:51.586  	test-flow-perf:	explicitly disabled via build config
00:06:51.586  	test-gpudev:	explicitly disabled via build config
00:06:51.586  	test-mldev:	explicitly disabled via build config
00:06:51.586  	test-pipeline:	explicitly disabled via build config
00:06:51.586  	test-pmd:	explicitly disabled via build config
00:06:51.587  	test-regex:	explicitly disabled via build config
00:06:51.587  	test-sad:	explicitly disabled via build config
00:06:51.587  	test-security-perf:	explicitly disabled via build config
00:06:51.587  	
00:06:51.587  libs:
00:06:51.587  	argparse:	explicitly disabled via build config
00:06:51.587  	metrics:	explicitly disabled via build config
00:06:51.587  	acl:	explicitly disabled via build config
00:06:51.587  	bbdev:	explicitly disabled via build config
00:06:51.587  	bitratestats:	explicitly disabled via build config
00:06:51.587  	bpf:	explicitly disabled via build config
00:06:51.587  	cfgfile:	explicitly disabled via build config
00:06:51.587  	distributor:	explicitly disabled via build config
00:06:51.587  	efd:	explicitly disabled via build config
00:06:51.587  	eventdev:	explicitly disabled via build config
00:06:51.587  	dispatcher:	explicitly disabled via build config
00:06:51.587  	gpudev:	explicitly disabled via build config
00:06:51.587  	gro:	explicitly disabled via build config
00:06:51.587  	gso:	explicitly disabled via build config
00:06:51.587  	ip_frag:	explicitly disabled via build config
00:06:51.587  	jobstats:	explicitly disabled via build config
00:06:51.587  	latencystats:	explicitly disabled via build config
00:06:51.587  	lpm:	explicitly disabled via build config
00:06:51.587  	member:	explicitly disabled via build config
00:06:51.587  	pcapng:	explicitly disabled via build config
00:06:51.587  	rawdev:	explicitly disabled via build config
00:06:51.587  	regexdev:	explicitly disabled via build config
00:06:51.587  	mldev:	explicitly disabled via build config
00:06:51.587  	rib:	explicitly disabled via build config
00:06:51.587  	sched:	explicitly disabled via build config
00:06:51.587  	stack:	explicitly disabled via build config
00:06:51.587  	ipsec:	explicitly disabled via build config
00:06:51.587  	pdcp:	explicitly disabled via build config
00:06:51.587  	fib:	explicitly disabled via build config
00:06:51.587  	port:	explicitly disabled via build config
00:06:51.587  	pdump:	explicitly disabled via build config
00:06:51.587  	table:	explicitly disabled via build config
00:06:51.587  	pipeline:	explicitly disabled via build config
00:06:51.587  	graph:	explicitly disabled via build config
00:06:51.587  	node:	explicitly disabled via build config
00:06:51.587  	
00:06:51.587  drivers:
00:06:51.587  	common/cpt:	not in enabled drivers build config
00:06:51.587  	common/dpaax:	not in enabled drivers build config
00:06:51.587  	common/iavf:	not in enabled drivers build config
00:06:51.587  	common/idpf:	not in enabled drivers build config
00:06:51.587  	common/ionic:	not in enabled drivers build config
00:06:51.587  	common/mvep:	not in enabled drivers build config
00:06:51.587  	common/octeontx:	not in enabled drivers build config
00:06:51.587  	bus/auxiliary:	not in enabled drivers build config
00:06:51.587  	bus/cdx:	not in enabled drivers build config
00:06:51.587  	bus/dpaa:	not in enabled drivers build config
00:06:51.587  	bus/fslmc:	not in enabled drivers build config
00:06:51.587  	bus/ifpga:	not in enabled drivers build config
00:06:51.587  	bus/platform:	not in enabled drivers build config
00:06:51.587  	bus/uacce:	not in enabled drivers build config
00:06:51.587  	bus/vmbus:	not in enabled drivers build config
00:06:51.587  	common/cnxk:	not in enabled drivers build config
00:06:51.587  	common/mlx5:	not in enabled drivers build config
00:06:51.587  	common/nfp:	not in enabled drivers build config
00:06:51.587  	common/nitrox:	not in enabled drivers build config
00:06:51.587  	common/qat:	not in enabled drivers build config
00:06:51.587  	common/sfc_efx:	not in enabled drivers build config
00:06:51.587  	mempool/bucket:	not in enabled drivers build config
00:06:51.587  	mempool/cnxk:	not in enabled drivers build config
00:06:51.587  	mempool/dpaa:	not in enabled drivers build config
00:06:51.587  	mempool/dpaa2:	not in enabled drivers build config
00:06:51.587  	mempool/octeontx:	not in enabled drivers build config
00:06:51.587  	mempool/stack:	not in enabled drivers build config
00:06:51.587  	dma/cnxk:	not in enabled drivers build config
00:06:51.587  	dma/dpaa:	not in enabled drivers build config
00:06:51.587  	dma/dpaa2:	not in enabled drivers build config
00:06:51.587  	dma/hisilicon:	not in enabled drivers build config
00:06:51.587  	dma/idxd:	not in enabled drivers build config
00:06:51.587  	dma/ioat:	not in enabled drivers build config
00:06:51.587  	dma/skeleton:	not in enabled drivers build config
00:06:51.587  	net/af_packet:	not in enabled drivers build config
00:06:51.587  	net/af_xdp:	not in enabled drivers build config
00:06:51.587  	net/ark:	not in enabled drivers build config
00:06:51.587  	net/atlantic:	not in enabled drivers build config
00:06:51.587  	net/avp:	not in enabled drivers build config
00:06:51.587  	net/axgbe:	not in enabled drivers build config
00:06:51.587  	net/bnx2x:	not in enabled drivers build config
00:06:51.587  	net/bnxt:	not in enabled drivers build config
00:06:51.587  	net/bonding:	not in enabled drivers build config
00:06:51.587  	net/cnxk:	not in enabled drivers build config
00:06:51.587  	net/cpfl:	not in enabled drivers build config
00:06:51.587  	net/cxgbe:	not in enabled drivers build config
00:06:51.587  	net/dpaa:	not in enabled drivers build config
00:06:51.587  	net/dpaa2:	not in enabled drivers build config
00:06:51.587  	net/e1000:	not in enabled drivers build config
00:06:51.587  	net/ena:	not in enabled drivers build config
00:06:51.587  	net/enetc:	not in enabled drivers build config
00:06:51.587  	net/enetfec:	not in enabled drivers build config
00:06:51.587  	net/enic:	not in enabled drivers build config
00:06:51.587  	net/failsafe:	not in enabled drivers build config
00:06:51.587  	net/fm10k:	not in enabled drivers build config
00:06:51.587  	net/gve:	not in enabled drivers build config
00:06:51.587  	net/hinic:	not in enabled drivers build config
00:06:51.587  	net/hns3:	not in enabled drivers build config
00:06:51.587  	net/i40e:	not in enabled drivers build config
00:06:51.587  	net/iavf:	not in enabled drivers build config
00:06:51.587  	net/ice:	not in enabled drivers build config
00:06:51.587  	net/idpf:	not in enabled drivers build config
00:06:51.587  	net/igc:	not in enabled drivers build config
00:06:51.587  	net/ionic:	not in enabled drivers build config
00:06:51.587  	net/ipn3ke:	not in enabled drivers build config
00:06:51.587  	net/ixgbe:	not in enabled drivers build config
00:06:51.587  	net/mana:	not in enabled drivers build config
00:06:51.587  	net/memif:	not in enabled drivers build config
00:06:51.587  	net/mlx4:	not in enabled drivers build config
00:06:51.587  	net/mlx5:	not in enabled drivers build config
00:06:51.587  	net/mvneta:	not in enabled drivers build config
00:06:51.587  	net/mvpp2:	not in enabled drivers build config
00:06:51.587  	net/netvsc:	not in enabled drivers build config
00:06:51.587  	net/nfb:	not in enabled drivers build config
00:06:51.587  	net/nfp:	not in enabled drivers build config
00:06:51.587  	net/ngbe:	not in enabled drivers build config
00:06:51.587  	net/null:	not in enabled drivers build config
00:06:51.587  	net/octeontx:	not in enabled drivers build config
00:06:51.587  	net/octeon_ep:	not in enabled drivers build config
00:06:51.587  	net/pcap:	not in enabled drivers build config
00:06:51.587  	net/pfe:	not in enabled drivers build config
00:06:51.587  	net/qede:	not in enabled drivers build config
00:06:51.587  	net/ring:	not in enabled drivers build config
00:06:51.587  	net/sfc:	not in enabled drivers build config
00:06:51.587  	net/softnic:	not in enabled drivers build config
00:06:51.587  	net/tap:	not in enabled drivers build config
00:06:51.587  	net/thunderx:	not in enabled drivers build config
00:06:51.587  	net/txgbe:	not in enabled drivers build config
00:06:51.587  	net/vdev_netvsc:	not in enabled drivers build config
00:06:51.587  	net/vhost:	not in enabled drivers build config
00:06:51.587  	net/virtio:	not in enabled drivers build config
00:06:51.587  	net/vmxnet3:	not in enabled drivers build config
00:06:51.587  	raw/*:	missing internal dependency, "rawdev"
00:06:51.587  	crypto/armv8:	not in enabled drivers build config
00:06:51.587  	crypto/bcmfs:	not in enabled drivers build config
00:06:51.587  	crypto/caam_jr:	not in enabled drivers build config
00:06:51.587  	crypto/ccp:	not in enabled drivers build config
00:06:51.587  	crypto/cnxk:	not in enabled drivers build config
00:06:51.587  	crypto/dpaa_sec:	not in enabled drivers build config
00:06:51.587  	crypto/dpaa2_sec:	not in enabled drivers build config
00:06:51.587  	crypto/ipsec_mb:	not in enabled drivers build config
00:06:51.587  	crypto/mlx5:	not in enabled drivers build config
00:06:51.587  	crypto/mvsam:	not in enabled drivers build config
00:06:51.587  	crypto/nitrox:	not in enabled drivers build config
00:06:51.587  	crypto/null:	not in enabled drivers build config
00:06:51.587  	crypto/octeontx:	not in enabled drivers build config
00:06:51.587  	crypto/openssl:	not in enabled drivers build config
00:06:51.587  	crypto/scheduler:	not in enabled drivers build config
00:06:51.587  	crypto/uadk:	not in enabled drivers build config
00:06:51.587  	crypto/virtio:	not in enabled drivers build config
00:06:51.587  	compress/isal:	not in enabled drivers build config
00:06:51.587  	compress/mlx5:	not in enabled drivers build config
00:06:51.587  	compress/nitrox:	not in enabled drivers build config
00:06:51.587  	compress/octeontx:	not in enabled drivers build config
00:06:51.587  	compress/zlib:	not in enabled drivers build config
00:06:51.587  	regex/*:	missing internal dependency, "regexdev"
00:06:51.587  	ml/*:	missing internal dependency, "mldev"
00:06:51.587  	vdpa/ifc:	not in enabled drivers build config
00:06:51.587  	vdpa/mlx5:	not in enabled drivers build config
00:06:51.587  	vdpa/nfp:	not in enabled drivers build config
00:06:51.587  	vdpa/sfc:	not in enabled drivers build config
00:06:51.587  	event/*:	missing internal dependency, "eventdev"
00:06:51.587  	baseband/*:	missing internal dependency, "bbdev"
00:06:51.587  	gpu/*:	missing internal dependency, "gpudev"
00:06:51.587  	
00:06:51.587  
00:06:51.587  Build targets in project: 85
00:06:51.587  
00:06:51.587  DPDK 24.03.0
00:06:51.587  
00:06:51.587    User defined options
00:06:51.587      buildtype          : debug
00:06:51.587      default_library    : shared
00:06:51.587      libdir             : lib
00:06:51.587      prefix             : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:06:51.587      c_args             : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 
00:06:51.587      c_link_args        : 
00:06:51.587      cpu_instruction_set: native
00:06:51.587      disable_apps       : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:06:51.587      disable_libs       : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:06:51.587      enable_docs        : false
00:06:51.587      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:06:51.587      enable_kmods       : false
00:06:51.587      max_lcores         : 128
00:06:51.587      tests              : false
00:06:51.588  
00:06:51.588  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:06:51.847  ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:06:52.110  [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:06:52.110  [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:06:52.110  [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:06:52.110  [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:06:52.110  [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:06:52.110  [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:06:52.110  [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:06:52.110  [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:06:52.110  [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:06:52.110  [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:06:52.110  [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:06:52.110  [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:06:52.110  [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:06:52.110  [14/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:06:52.110  [15/268] Linking static target lib/librte_kvargs.a
00:06:52.110  [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:06:52.110  [17/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:06:52.375  [18/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:06:52.375  [19/268] Linking static target lib/librte_log.a
00:06:52.375  [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:06:52.375  [21/268] Linking static target lib/librte_pci.a
00:06:52.375  [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:06:52.375  [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:06:52.375  [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:06:52.637  [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:06:52.637  [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:06:52.637  [27/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:06:52.637  [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:06:52.637  [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:06:52.637  [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:06:52.637  [31/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:06:52.638  [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:06:52.638  [33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:06:52.638  [34/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:06:52.638  [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:06:52.638  [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:06:52.638  [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:06:52.638  [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:06:52.638  [39/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:06:52.638  [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:06:52.638  [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:06:52.638  [42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:06:52.638  [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:06:52.638  [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:06:52.638  [45/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:06:52.638  [46/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:06:52.638  [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:06:52.638  [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:06:52.638  [49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:06:52.638  [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:06:52.638  [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:06:52.638  [52/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:52.638  [53/268] Linking static target lib/librte_ring.a
00:06:52.638  [54/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:06:52.638  [55/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:06:52.638  [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:06:52.638  [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:06:52.638  [58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:06:52.638  [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:06:52.638  [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:06:52.638  [61/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:06:52.638  [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:06:52.638  [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:06:52.638  [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:06:52.638  [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:06:52.638  [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:06:52.638  [67/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:06:52.638  [68/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:06:52.638  [69/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:06:52.638  [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:06:52.638  [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:06:52.899  [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:06:52.899  [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:06:52.899  [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:06:52.899  [75/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:06:52.899  [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:06:52.899  [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:06:52.899  [78/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:06:52.899  [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:06:52.899  [80/268] Linking static target lib/librte_telemetry.a
00:06:52.899  [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:06:52.899  [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:06:52.899  [83/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:06:52.899  [84/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:06:52.899  [85/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:06:52.899  [86/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:06:52.899  [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:06:52.899  [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:06:52.900  [89/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:06:52.900  [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:06:52.900  [91/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:06:52.900  [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:06:52.900  [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:06:52.900  [94/268] Linking static target lib/librte_meter.a
00:06:52.900  [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:06:52.900  [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:06:52.900  [97/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:06:52.900  [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:06:52.900  [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:06:52.900  [100/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:06:52.900  [101/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:06:52.900  [102/268] Linking static target lib/librte_net.a
00:06:52.900  [103/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:06:52.900  [104/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:06:52.900  [105/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:06:52.900  [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:06:52.900  [107/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:06:52.900  [108/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:06:52.900  [109/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:06:52.900  [110/268] Linking static target lib/librte_mempool.a
00:06:52.900  [111/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:06:52.900  [112/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:06:52.900  [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:06:52.900  [114/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:06:52.900  [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:06:52.900  [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:06:52.900  [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:06:52.900  [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:06:52.900  [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:06:52.900  [120/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:06:52.900  [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:06:52.900  [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:06:52.900  [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:06:52.900  [124/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:06:52.900  [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:06:52.900  [126/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:06:52.900  [127/268] Linking static target lib/librte_rcu.a
00:06:52.900  [128/268] Linking static target lib/librte_cmdline.a
00:06:52.900  [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:06:52.900  [130/268] Linking static target lib/librte_eal.a
00:06:53.158  [131/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:06:53.159  [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:06:53.159  [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:06:53.159  [134/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:06:53.159  [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.159  [136/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:06:53.159  [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:06:53.159  [138/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:06:53.159  [139/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.159  [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:06:53.159  [141/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.159  [142/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.159  [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:06:53.159  [144/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:06:53.159  [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:06:53.159  [146/268] Linking static target lib/librte_mbuf.a
00:06:53.159  [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:06:53.159  [148/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:06:53.159  [149/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:06:53.159  [150/268] Linking target lib/librte_log.so.24.1
00:06:53.159  [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:06:53.159  [152/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:06:53.159  [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:06:53.159  [154/268] Linking static target lib/librte_timer.a
00:06:53.159  [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:06:53.159  [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:06:53.159  [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:06:53.159  [158/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:06:53.159  [159/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:06:53.159  [160/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:06:53.159  [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:06:53.159  [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.159  [163/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:06:53.159  [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:06:53.159  [165/268] Linking static target lib/librte_compressdev.a
00:06:53.434  [166/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:06:53.434  [167/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:06:53.434  [168/268] Linking static target lib/librte_security.a
00:06:53.434  [169/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:06:53.434  [170/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.434  [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:06:53.434  [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:06:53.434  [173/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:06:53.434  [174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:06:53.434  [175/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:06:53.434  [176/268] Linking static target lib/librte_dmadev.a
00:06:53.434  [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:06:53.434  [178/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:06:53.434  [179/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:06:53.434  [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:06:53.434  [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:06:53.434  [182/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:06:53.434  [183/268] Linking static target lib/librte_power.a
00:06:53.434  [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:06:53.434  [185/268] Linking static target lib/librte_reorder.a
00:06:53.434  [186/268] Linking target lib/librte_telemetry.so.24.1
00:06:53.434  [187/268] Linking target lib/librte_kvargs.so.24.1
00:06:53.434  [188/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:06:53.434  [189/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:06:53.434  [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:06:53.434  [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:06:53.434  [192/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:06:53.434  [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:06:53.434  [194/268] Linking static target lib/librte_hash.a
00:06:53.434  [195/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:06:53.434  [196/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:06:53.434  [197/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:06:53.693  [198/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:06:53.693  [199/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:53.693  [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:06:53.693  [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:06:53.693  [202/268] Linking static target drivers/librte_bus_vdev.a
00:06:53.693  [203/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:53.693  [204/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:06:53.693  [205/268] Linking static target drivers/librte_bus_pci.a
00:06:53.693  [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:06:53.693  [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:06:53.693  [208/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.693  [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.693  [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:53.693  [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:06:53.693  [212/268] Linking static target drivers/librte_mempool_ring.a
00:06:53.693  [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:06:53.693  [214/268] Linking static target lib/librte_cryptodev.a
00:06:53.951  [215/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.951  [216/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.951  [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.951  [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.951  [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:53.951  [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:54.208  [221/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:06:54.208  [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:06:54.208  [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:06:54.208  [224/268] Linking static target lib/librte_ethdev.a
00:06:54.208  [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:06:54.466  [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:06:54.466  [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:06:55.400  [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:06:55.400  [229/268] Linking static target lib/librte_vhost.a
00:06:55.658  [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:06:57.032  [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:07:02.300  [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:07:03.236  [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:07:03.236  [234/268] Linking target lib/librte_eal.so.24.1
00:07:03.236  [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:07:03.495  [236/268] Linking target lib/librte_pci.so.24.1
00:07:03.495  [237/268] Linking target lib/librte_ring.so.24.1
00:07:03.495  [238/268] Linking target lib/librte_meter.so.24.1
00:07:03.495  [239/268] Linking target drivers/librte_bus_vdev.so.24.1
00:07:03.495  [240/268] Linking target lib/librte_timer.so.24.1
00:07:03.495  [241/268] Linking target lib/librte_dmadev.so.24.1
00:07:03.495  [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols
00:07:03.495  [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:07:03.495  [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:07:03.495  [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:07:03.495  [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols
00:07:03.495  [247/268] Linking target lib/librte_mempool.so.24.1
00:07:03.495  [248/268] Linking target lib/librte_rcu.so.24.1
00:07:03.495  [249/268] Linking target drivers/librte_bus_pci.so.24.1
00:07:03.753  [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:07:03.753  [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:07:03.753  [252/268] Linking target drivers/librte_mempool_ring.so.24.1
00:07:03.753  [253/268] Linking target lib/librte_mbuf.so.24.1
00:07:03.753  [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:07:04.012  [255/268] Linking target lib/librte_compressdev.so.24.1
00:07:04.012  [256/268] Linking target lib/librte_reorder.so.24.1
00:07:04.012  [257/268] Linking target lib/librte_net.so.24.1
00:07:04.012  [258/268] Linking target lib/librte_cryptodev.so.24.1
00:07:04.012  [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:07:04.012  [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:07:04.012  [261/268] Linking target lib/librte_security.so.24.1
00:07:04.012  [262/268] Linking target lib/librte_hash.so.24.1
00:07:04.012  [263/268] Linking target lib/librte_cmdline.so.24.1
00:07:04.012  [264/268] Linking target lib/librte_ethdev.so.24.1
00:07:04.270  [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols
00:07:04.270  [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols
00:07:04.270  [267/268] Linking target lib/librte_power.so.24.1
00:07:04.270  [268/268] Linking target lib/librte_vhost.so.24.1
00:07:04.270  INFO: autodetecting backend as ninja
00:07:04.270  INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:07:16.475    CC lib/log/log.o
00:07:16.475    CC lib/log/log_flags.o
00:07:16.475    CC lib/log/log_deprecated.o
00:07:16.475    CC lib/ut_mock/mock.o
00:07:16.475    CC lib/ut/ut.o
00:07:16.475    LIB libspdk_ut.a
00:07:16.475    LIB libspdk_log.a
00:07:16.475    LIB libspdk_ut_mock.a
00:07:16.475    SO libspdk_ut.so.2.0
00:07:16.475    SO libspdk_log.so.7.1
00:07:16.475    SO libspdk_ut_mock.so.6.0
00:07:16.475    SYMLINK libspdk_ut.so
00:07:16.475    SYMLINK libspdk_log.so
00:07:16.475    SYMLINK libspdk_ut_mock.so
00:07:16.475    CC lib/util/base64.o
00:07:16.475    CC lib/ioat/ioat.o
00:07:16.475    CC lib/util/cpuset.o
00:07:16.475    CC lib/util/bit_array.o
00:07:16.475    CC lib/util/crc16.o
00:07:16.475    CC lib/util/crc32c.o
00:07:16.475    CC lib/util/crc32.o
00:07:16.475    CC lib/util/crc32_ieee.o
00:07:16.475    CC lib/dma/dma.o
00:07:16.475    CC lib/util/crc64.o
00:07:16.475    CC lib/util/dif.o
00:07:16.475    CXX lib/trace_parser/trace.o
00:07:16.475    CC lib/util/fd.o
00:07:16.475    CC lib/util/fd_group.o
00:07:16.475    CC lib/util/file.o
00:07:16.475    CC lib/util/hexlify.o
00:07:16.475    CC lib/util/iov.o
00:07:16.475    CC lib/util/math.o
00:07:16.475    CC lib/util/net.o
00:07:16.475    CC lib/util/pipe.o
00:07:16.475    CC lib/util/strerror_tls.o
00:07:16.475    CC lib/util/string.o
00:07:16.475    CC lib/util/uuid.o
00:07:16.475    CC lib/util/xor.o
00:07:16.475    CC lib/util/zipf.o
00:07:16.475    CC lib/util/md5.o
00:07:16.475    CC lib/vfio_user/host/vfio_user_pci.o
00:07:16.475    CC lib/vfio_user/host/vfio_user.o
00:07:16.475    LIB libspdk_dma.a
00:07:16.475    SO libspdk_dma.so.5.0
00:07:16.475    LIB libspdk_ioat.a
00:07:16.475    SYMLINK libspdk_dma.so
00:07:16.475    SO libspdk_ioat.so.7.0
00:07:16.475    SYMLINK libspdk_ioat.so
00:07:16.475    LIB libspdk_vfio_user.a
00:07:16.475    SO libspdk_vfio_user.so.5.0
00:07:16.475    LIB libspdk_util.a
00:07:16.475    SYMLINK libspdk_vfio_user.so
00:07:16.475    SO libspdk_util.so.10.1
00:07:16.475    SYMLINK libspdk_util.so
00:07:16.475    LIB libspdk_trace_parser.a
00:07:16.475    SO libspdk_trace_parser.so.6.0
00:07:16.475    SYMLINK libspdk_trace_parser.so
00:07:16.475    CC lib/json/json_parse.o
00:07:16.475    CC lib/rdma_utils/rdma_utils.o
00:07:16.475    CC lib/json/json_util.o
00:07:16.475    CC lib/json/json_write.o
00:07:16.475    CC lib/conf/conf.o
00:07:16.475    CC lib/env_dpdk/env.o
00:07:16.475    CC lib/env_dpdk/memory.o
00:07:16.475    CC lib/vmd/vmd.o
00:07:16.475    CC lib/env_dpdk/pci.o
00:07:16.475    CC lib/idxd/idxd.o
00:07:16.475    CC lib/vmd/led.o
00:07:16.475    CC lib/env_dpdk/init.o
00:07:16.475    CC lib/idxd/idxd_user.o
00:07:16.475    CC lib/env_dpdk/threads.o
00:07:16.475    CC lib/idxd/idxd_kernel.o
00:07:16.475    CC lib/env_dpdk/pci_ioat.o
00:07:16.475    CC lib/env_dpdk/pci_virtio.o
00:07:16.475    CC lib/env_dpdk/pci_vmd.o
00:07:16.475    CC lib/env_dpdk/pci_idxd.o
00:07:16.475    CC lib/env_dpdk/pci_event.o
00:07:16.475    CC lib/env_dpdk/sigbus_handler.o
00:07:16.475    CC lib/env_dpdk/pci_dpdk.o
00:07:16.475    CC lib/env_dpdk/pci_dpdk_2207.o
00:07:16.475    CC lib/env_dpdk/pci_dpdk_2211.o
00:07:16.732    LIB libspdk_conf.a
00:07:16.732    LIB libspdk_rdma_utils.a
00:07:16.732    LIB libspdk_json.a
00:07:16.732    SO libspdk_conf.so.6.0
00:07:16.732    SO libspdk_rdma_utils.so.1.0
00:07:16.732    SO libspdk_json.so.6.0
00:07:16.732    SYMLINK libspdk_conf.so
00:07:16.732    SYMLINK libspdk_rdma_utils.so
00:07:16.732    SYMLINK libspdk_json.so
00:07:16.990    LIB libspdk_idxd.a
00:07:16.990    SO libspdk_idxd.so.12.1
00:07:16.990    LIB libspdk_vmd.a
00:07:16.990    SYMLINK libspdk_idxd.so
00:07:16.990    SO libspdk_vmd.so.6.0
00:07:16.990    CC lib/rdma_provider/common.o
00:07:16.990    CC lib/rdma_provider/rdma_provider_verbs.o
00:07:16.990    SYMLINK libspdk_vmd.so
00:07:17.248    CC lib/jsonrpc/jsonrpc_server.o
00:07:17.248    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:07:17.248    CC lib/jsonrpc/jsonrpc_client.o
00:07:17.248    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:07:17.248    LIB libspdk_rdma_provider.a
00:07:17.248    SO libspdk_rdma_provider.so.7.0
00:07:17.248    LIB libspdk_jsonrpc.a
00:07:17.506    SYMLINK libspdk_rdma_provider.so
00:07:17.506    SO libspdk_jsonrpc.so.6.0
00:07:17.506    SYMLINK libspdk_jsonrpc.so
00:07:17.506    LIB libspdk_env_dpdk.a
00:07:17.506    SO libspdk_env_dpdk.so.15.1
00:07:17.764    SYMLINK libspdk_env_dpdk.so
00:07:17.764    CC lib/rpc/rpc.o
00:07:18.022    LIB libspdk_rpc.a
00:07:18.022    SO libspdk_rpc.so.6.0
00:07:18.022    SYMLINK libspdk_rpc.so
00:07:18.280    CC lib/trace/trace.o
00:07:18.280    CC lib/trace/trace_flags.o
00:07:18.280    CC lib/keyring/keyring.o
00:07:18.280    CC lib/trace/trace_rpc.o
00:07:18.280    CC lib/keyring/keyring_rpc.o
00:07:18.280    CC lib/notify/notify.o
00:07:18.280    CC lib/notify/notify_rpc.o
00:07:18.540    LIB libspdk_notify.a
00:07:18.540    SO libspdk_notify.so.6.0
00:07:18.540    LIB libspdk_trace.a
00:07:18.540    LIB libspdk_keyring.a
00:07:18.540    SO libspdk_trace.so.11.0
00:07:18.540    SO libspdk_keyring.so.2.0
00:07:18.540    SYMLINK libspdk_notify.so
00:07:18.540    SYMLINK libspdk_trace.so
00:07:18.540    SYMLINK libspdk_keyring.so
00:07:19.106    CC lib/thread/thread.o
00:07:19.106    CC lib/thread/iobuf.o
00:07:19.106    CC lib/sock/sock.o
00:07:19.106    CC lib/sock/sock_rpc.o
00:07:19.366    LIB libspdk_sock.a
00:07:19.366    SO libspdk_sock.so.10.0
00:07:19.366    SYMLINK libspdk_sock.so
00:07:19.629    CC lib/nvme/nvme_ctrlr_cmd.o
00:07:19.629    CC lib/nvme/nvme_ctrlr.o
00:07:19.629    CC lib/nvme/nvme_fabric.o
00:07:19.629    CC lib/nvme/nvme_ns.o
00:07:19.629    CC lib/nvme/nvme_ns_cmd.o
00:07:19.629    CC lib/nvme/nvme_pcie_common.o
00:07:19.629    CC lib/nvme/nvme_pcie.o
00:07:19.629    CC lib/nvme/nvme_qpair.o
00:07:19.629    CC lib/nvme/nvme.o
00:07:19.629    CC lib/nvme/nvme_quirks.o
00:07:19.629    CC lib/nvme/nvme_transport.o
00:07:19.629    CC lib/nvme/nvme_discovery.o
00:07:19.629    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:07:19.629    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:07:19.629    CC lib/nvme/nvme_tcp.o
00:07:19.629    CC lib/nvme/nvme_opal.o
00:07:19.629    CC lib/nvme/nvme_io_msg.o
00:07:19.629    CC lib/nvme/nvme_poll_group.o
00:07:19.629    CC lib/nvme/nvme_zns.o
00:07:19.629    CC lib/nvme/nvme_stubs.o
00:07:19.629    CC lib/nvme/nvme_auth.o
00:07:19.629    CC lib/nvme/nvme_cuse.o
00:07:19.629    CC lib/nvme/nvme_vfio_user.o
00:07:19.629    CC lib/nvme/nvme_rdma.o
00:07:19.886    LIB libspdk_thread.a
00:07:20.144    SO libspdk_thread.so.11.0
00:07:20.144    SYMLINK libspdk_thread.so
00:07:20.402    CC lib/virtio/virtio_vhost_user.o
00:07:20.402    CC lib/virtio/virtio.o
00:07:20.402    CC lib/virtio/virtio_vfio_user.o
00:07:20.402    CC lib/accel/accel.o
00:07:20.402    CC lib/virtio/virtio_pci.o
00:07:20.402    CC lib/accel/accel_rpc.o
00:07:20.402    CC lib/accel/accel_sw.o
00:07:20.402    CC lib/blob/blobstore.o
00:07:20.402    CC lib/blob/request.o
00:07:20.402    CC lib/blob/zeroes.o
00:07:20.402    CC lib/blob/blob_bs_dev.o
00:07:20.402    CC lib/init/json_config.o
00:07:20.402    CC lib/init/subsystem.o
00:07:20.402    CC lib/vfu_tgt/tgt_endpoint.o
00:07:20.402    CC lib/vfu_tgt/tgt_rpc.o
00:07:20.402    CC lib/init/subsystem_rpc.o
00:07:20.402    CC lib/init/rpc.o
00:07:20.402    CC lib/fsdev/fsdev.o
00:07:20.402    CC lib/fsdev/fsdev_io.o
00:07:20.402    CC lib/fsdev/fsdev_rpc.o
00:07:20.660    LIB libspdk_init.a
00:07:20.660    SO libspdk_init.so.6.0
00:07:20.660    LIB libspdk_virtio.a
00:07:20.660    LIB libspdk_vfu_tgt.a
00:07:20.660    SYMLINK libspdk_init.so
00:07:20.660    SO libspdk_virtio.so.7.0
00:07:20.660    SO libspdk_vfu_tgt.so.3.0
00:07:20.918    SYMLINK libspdk_virtio.so
00:07:20.918    SYMLINK libspdk_vfu_tgt.so
00:07:20.918    LIB libspdk_fsdev.a
00:07:20.918    SO libspdk_fsdev.so.2.0
00:07:20.918    CC lib/event/app.o
00:07:20.918    CC lib/event/reactor.o
00:07:20.918    CC lib/event/log_rpc.o
00:07:20.918    CC lib/event/app_rpc.o
00:07:20.918    CC lib/event/scheduler_static.o
00:07:21.176    SYMLINK libspdk_fsdev.so
00:07:21.176    LIB libspdk_accel.a
00:07:21.176    SO libspdk_accel.so.16.0
00:07:21.434    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:07:21.434    SYMLINK libspdk_accel.so
00:07:21.434    LIB libspdk_event.a
00:07:21.434    LIB libspdk_nvme.a
00:07:21.434    SO libspdk_event.so.14.0
00:07:21.434    SYMLINK libspdk_event.so
00:07:21.434    SO libspdk_nvme.so.15.0
00:07:21.691    CC lib/bdev/bdev.o
00:07:21.691    CC lib/bdev/bdev_rpc.o
00:07:21.691    CC lib/bdev/bdev_zone.o
00:07:21.691    CC lib/bdev/part.o
00:07:21.691    CC lib/bdev/scsi_nvme.o
00:07:21.691    SYMLINK libspdk_nvme.so
00:07:21.691    LIB libspdk_fuse_dispatcher.a
00:07:21.950    SO libspdk_fuse_dispatcher.so.1.0
00:07:21.950    SYMLINK libspdk_fuse_dispatcher.so
00:07:22.517    LIB libspdk_blob.a
00:07:22.775    SO libspdk_blob.so.12.0
00:07:22.775    SYMLINK libspdk_blob.so
00:07:23.034    CC lib/blobfs/blobfs.o
00:07:23.034    CC lib/blobfs/tree.o
00:07:23.034    CC lib/lvol/lvol.o
00:07:23.601    LIB libspdk_bdev.a
00:07:23.601    SO libspdk_bdev.so.17.0
00:07:23.601    LIB libspdk_blobfs.a
00:07:23.601    SYMLINK libspdk_bdev.so
00:07:23.601    SO libspdk_blobfs.so.11.0
00:07:23.601    LIB libspdk_lvol.a
00:07:23.601    SO libspdk_lvol.so.11.0
00:07:23.601    SYMLINK libspdk_blobfs.so
00:07:23.859    SYMLINK libspdk_lvol.so
00:07:23.859    CC lib/nvmf/ctrlr.o
00:07:23.859    CC lib/nvmf/ctrlr_discovery.o
00:07:23.859    CC lib/nvmf/ctrlr_bdev.o
00:07:23.859    CC lib/nvmf/subsystem.o
00:07:23.859    CC lib/nvmf/nvmf.o
00:07:23.859    CC lib/nvmf/transport.o
00:07:23.859    CC lib/nvmf/nvmf_rpc.o
00:07:23.859    CC lib/nvmf/tcp.o
00:07:23.859    CC lib/nvmf/stubs.o
00:07:23.859    CC lib/nvmf/vfio_user.o
00:07:23.859    CC lib/nvmf/mdns_server.o
00:07:23.859    CC lib/nvmf/rdma.o
00:07:23.859    CC lib/nbd/nbd.o
00:07:23.859    CC lib/nvmf/auth.o
00:07:23.859    CC lib/ublk/ublk.o
00:07:23.859    CC lib/nbd/nbd_rpc.o
00:07:23.859    CC lib/scsi/dev.o
00:07:23.859    CC lib/ublk/ublk_rpc.o
00:07:23.859    CC lib/scsi/lun.o
00:07:23.859    CC lib/scsi/port.o
00:07:23.859    CC lib/scsi/scsi.o
00:07:23.859    CC lib/scsi/scsi_bdev.o
00:07:23.859    CC lib/ftl/ftl_core.o
00:07:23.859    CC lib/scsi/scsi_pr.o
00:07:23.859    CC lib/ftl/ftl_init.o
00:07:23.860    CC lib/ftl/ftl_layout.o
00:07:23.860    CC lib/scsi/scsi_rpc.o
00:07:23.860    CC lib/ftl/ftl_debug.o
00:07:23.860    CC lib/scsi/task.o
00:07:23.860    CC lib/ftl/ftl_io.o
00:07:23.860    CC lib/ftl/ftl_sb.o
00:07:23.860    CC lib/ftl/ftl_l2p.o
00:07:23.860    CC lib/ftl/ftl_l2p_flat.o
00:07:23.860    CC lib/ftl/ftl_nv_cache.o
00:07:23.860    CC lib/ftl/ftl_band.o
00:07:23.860    CC lib/ftl/ftl_band_ops.o
00:07:23.860    CC lib/ftl/ftl_writer.o
00:07:23.860    CC lib/ftl/ftl_rq.o
00:07:23.860    CC lib/ftl/ftl_l2p_cache.o
00:07:23.860    CC lib/ftl/ftl_reloc.o
00:07:23.860    CC lib/ftl/ftl_p2l.o
00:07:23.860    CC lib/ftl/ftl_p2l_log.o
00:07:23.860    CC lib/ftl/mngt/ftl_mngt.o
00:07:23.860    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_startup.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_misc.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_md.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_band.o
00:07:24.120    CC lib/ftl/utils/ftl_conf.o
00:07:24.120    CC lib/ftl/utils/ftl_md.o
00:07:24.120    CC lib/ftl/utils/ftl_mempool.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:07:24.120    CC lib/ftl/utils/ftl_bitmap.o
00:07:24.120    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:07:24.120    CC lib/ftl/utils/ftl_property.o
00:07:24.121    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:07:24.121    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_sb_v3.o
00:07:24.121    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:07:24.121    CC lib/ftl/upgrade/ftl_sb_v5.o
00:07:24.121    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:07:24.121    CC lib/ftl/nvc/ftl_nvc_dev.o
00:07:24.121    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:07:24.121    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:07:24.121    CC lib/ftl/ftl_trace.o
00:07:24.121    CC lib/ftl/base/ftl_base_dev.o
00:07:24.121    CC lib/ftl/base/ftl_base_bdev.o
00:07:24.380    LIB libspdk_nbd.a
00:07:24.638    SO libspdk_nbd.so.7.0
00:07:24.638    SYMLINK libspdk_nbd.so
00:07:24.638    LIB libspdk_scsi.a
00:07:24.638    SO libspdk_scsi.so.9.0
00:07:24.638    SYMLINK libspdk_scsi.so
00:07:24.896    LIB libspdk_ublk.a
00:07:24.896    SO libspdk_ublk.so.3.0
00:07:24.896    SYMLINK libspdk_ublk.so
00:07:25.221    CC lib/vhost/vhost.o
00:07:25.221    CC lib/vhost/vhost_rpc.o
00:07:25.221    CC lib/vhost/vhost_scsi.o
00:07:25.221    CC lib/vhost/vhost_blk.o
00:07:25.221    CC lib/vhost/rte_vhost_user.o
00:07:25.221    CC lib/iscsi/init_grp.o
00:07:25.222    CC lib/iscsi/conn.o
00:07:25.222    CC lib/iscsi/iscsi.o
00:07:25.222    CC lib/iscsi/param.o
00:07:25.222    CC lib/iscsi/portal_grp.o
00:07:25.222    CC lib/iscsi/tgt_node.o
00:07:25.222    CC lib/iscsi/iscsi_subsystem.o
00:07:25.222    CC lib/iscsi/iscsi_rpc.o
00:07:25.222    CC lib/iscsi/task.o
00:07:25.222    LIB libspdk_ftl.a
00:07:25.222    SO libspdk_ftl.so.9.0
00:07:25.480    SYMLINK libspdk_ftl.so
00:07:25.737    LIB libspdk_nvmf.a
00:07:25.737    LIB libspdk_vhost.a
00:07:25.737    SO libspdk_nvmf.so.20.0
00:07:25.995    SO libspdk_vhost.so.8.0
00:07:25.995    SYMLINK libspdk_vhost.so
00:07:25.995    SYMLINK libspdk_nvmf.so
00:07:25.995    LIB libspdk_iscsi.a
00:07:26.254    SO libspdk_iscsi.so.8.0
00:07:26.254    SYMLINK libspdk_iscsi.so
00:07:26.822    CC module/env_dpdk/env_dpdk_rpc.o
00:07:26.822    CC module/vfu_device/vfu_virtio_blk.o
00:07:26.822    CC module/vfu_device/vfu_virtio.o
00:07:26.822    CC module/vfu_device/vfu_virtio_scsi.o
00:07:26.822    CC module/vfu_device/vfu_virtio_rpc.o
00:07:26.822    CC module/vfu_device/vfu_virtio_fs.o
00:07:26.822    CC module/blob/bdev/blob_bdev.o
00:07:26.822    CC module/sock/posix/posix.o
00:07:26.822    CC module/accel/ioat/accel_ioat.o
00:07:26.822    CC module/accel/ioat/accel_ioat_rpc.o
00:07:26.822    CC module/accel/iaa/accel_iaa.o
00:07:26.822    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:07:26.822    CC module/accel/iaa/accel_iaa_rpc.o
00:07:26.822    CC module/accel/dsa/accel_dsa.o
00:07:26.822    LIB libspdk_env_dpdk_rpc.a
00:07:26.822    CC module/accel/dsa/accel_dsa_rpc.o
00:07:26.822    CC module/accel/error/accel_error_rpc.o
00:07:26.822    CC module/accel/error/accel_error.o
00:07:26.822    CC module/keyring/linux/keyring.o
00:07:26.822    CC module/keyring/linux/keyring_rpc.o
00:07:26.822    CC module/fsdev/aio/fsdev_aio.o
00:07:26.822    CC module/fsdev/aio/fsdev_aio_rpc.o
00:07:26.822    CC module/scheduler/dynamic/scheduler_dynamic.o
00:07:26.822    CC module/fsdev/aio/linux_aio_mgr.o
00:07:26.822    CC module/keyring/file/keyring.o
00:07:26.822    CC module/scheduler/gscheduler/gscheduler.o
00:07:26.822    CC module/keyring/file/keyring_rpc.o
00:07:27.080    SO libspdk_env_dpdk_rpc.so.6.0
00:07:27.080    SYMLINK libspdk_env_dpdk_rpc.so
00:07:27.080    LIB libspdk_keyring_linux.a
00:07:27.080    LIB libspdk_scheduler_dpdk_governor.a
00:07:27.080    LIB libspdk_keyring_file.a
00:07:27.080    SO libspdk_scheduler_dpdk_governor.so.4.0
00:07:27.080    SO libspdk_keyring_linux.so.1.0
00:07:27.080    LIB libspdk_scheduler_gscheduler.a
00:07:27.080    SO libspdk_keyring_file.so.2.0
00:07:27.080    LIB libspdk_accel_ioat.a
00:07:27.080    LIB libspdk_scheduler_dynamic.a
00:07:27.080    LIB libspdk_accel_error.a
00:07:27.080    LIB libspdk_accel_iaa.a
00:07:27.080    SO libspdk_scheduler_gscheduler.so.4.0
00:07:27.080    SO libspdk_scheduler_dynamic.so.4.0
00:07:27.080    SO libspdk_accel_ioat.so.6.0
00:07:27.080    SYMLINK libspdk_scheduler_dpdk_governor.so
00:07:27.080    SO libspdk_accel_error.so.2.0
00:07:27.080    SYMLINK libspdk_keyring_file.so
00:07:27.080    SO libspdk_accel_iaa.so.3.0
00:07:27.080    SYMLINK libspdk_keyring_linux.so
00:07:27.080    LIB libspdk_blob_bdev.a
00:07:27.080    SYMLINK libspdk_scheduler_gscheduler.so
00:07:27.338    SO libspdk_blob_bdev.so.12.0
00:07:27.338    LIB libspdk_accel_dsa.a
00:07:27.338    SYMLINK libspdk_accel_ioat.so
00:07:27.338    SYMLINK libspdk_scheduler_dynamic.so
00:07:27.338    SYMLINK libspdk_accel_error.so
00:07:27.338    SYMLINK libspdk_accel_iaa.so
00:07:27.338    SO libspdk_accel_dsa.so.5.0
00:07:27.338    SYMLINK libspdk_blob_bdev.so
00:07:27.338    SYMLINK libspdk_accel_dsa.so
00:07:27.338    LIB libspdk_vfu_device.a
00:07:27.338    SO libspdk_vfu_device.so.3.0
00:07:27.338    SYMLINK libspdk_vfu_device.so
00:07:27.596    LIB libspdk_fsdev_aio.a
00:07:27.596    LIB libspdk_sock_posix.a
00:07:27.596    SO libspdk_fsdev_aio.so.1.0
00:07:27.596    SO libspdk_sock_posix.so.6.0
00:07:27.596    SYMLINK libspdk_fsdev_aio.so
00:07:27.596    SYMLINK libspdk_sock_posix.so
00:07:27.596    CC module/bdev/delay/vbdev_delay.o
00:07:27.596    CC module/bdev/delay/vbdev_delay_rpc.o
00:07:27.596    CC module/bdev/error/vbdev_error.o
00:07:27.596    CC module/bdev/error/vbdev_error_rpc.o
00:07:27.596    CC module/bdev/raid/bdev_raid.o
00:07:27.596    CC module/bdev/raid/bdev_raid_rpc.o
00:07:27.596    CC module/bdev/raid/bdev_raid_sb.o
00:07:27.596    CC module/bdev/raid/raid0.o
00:07:27.596    CC module/blobfs/bdev/blobfs_bdev.o
00:07:27.596    CC module/bdev/malloc/bdev_malloc_rpc.o
00:07:27.596    CC module/bdev/virtio/bdev_virtio_scsi.o
00:07:27.596    CC module/bdev/malloc/bdev_malloc.o
00:07:27.596    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:07:27.596    CC module/bdev/raid/raid1.o
00:07:27.596    CC module/bdev/split/vbdev_split.o
00:07:27.596    CC module/bdev/lvol/vbdev_lvol.o
00:07:27.596    CC module/bdev/virtio/bdev_virtio_blk.o
00:07:27.596    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:07:27.596    CC module/bdev/gpt/gpt.o
00:07:27.596    CC module/bdev/raid/concat.o
00:07:27.596    CC module/bdev/nvme/bdev_nvme.o
00:07:27.596    CC module/bdev/virtio/bdev_virtio_rpc.o
00:07:27.596    CC module/bdev/split/vbdev_split_rpc.o
00:07:27.596    CC module/bdev/gpt/vbdev_gpt.o
00:07:27.596    CC module/bdev/nvme/bdev_nvme_rpc.o
00:07:27.596    CC module/bdev/nvme/bdev_mdns_client.o
00:07:27.854    CC module/bdev/nvme/nvme_rpc.o
00:07:27.854    CC module/bdev/ftl/bdev_ftl.o
00:07:27.854    CC module/bdev/ftl/bdev_ftl_rpc.o
00:07:27.854    CC module/bdev/nvme/vbdev_opal.o
00:07:27.854    CC module/bdev/nvme/vbdev_opal_rpc.o
00:07:27.854    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:07:27.854    CC module/bdev/aio/bdev_aio.o
00:07:27.854    CC module/bdev/null/bdev_null.o
00:07:27.854    CC module/bdev/aio/bdev_aio_rpc.o
00:07:27.854    CC module/bdev/zone_block/vbdev_zone_block.o
00:07:27.854    CC module/bdev/null/bdev_null_rpc.o
00:07:27.854    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:07:27.854    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:07:27.854    CC module/bdev/passthru/vbdev_passthru.o
00:07:27.854    CC module/bdev/iscsi/bdev_iscsi.o
00:07:27.854    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:07:28.112    LIB libspdk_blobfs_bdev.a
00:07:28.112    LIB libspdk_bdev_split.a
00:07:28.112    SO libspdk_blobfs_bdev.so.6.0
00:07:28.112    SO libspdk_bdev_split.so.6.0
00:07:28.112    LIB libspdk_bdev_error.a
00:07:28.112    SO libspdk_bdev_error.so.6.0
00:07:28.112    LIB libspdk_bdev_null.a
00:07:28.112    LIB libspdk_bdev_delay.a
00:07:28.112    SYMLINK libspdk_blobfs_bdev.so
00:07:28.112    SYMLINK libspdk_bdev_split.so
00:07:28.112    LIB libspdk_bdev_gpt.a
00:07:28.112    LIB libspdk_bdev_zone_block.a
00:07:28.112    LIB libspdk_bdev_ftl.a
00:07:28.112    LIB libspdk_bdev_malloc.a
00:07:28.112    SO libspdk_bdev_null.so.6.0
00:07:28.112    SO libspdk_bdev_zone_block.so.6.0
00:07:28.112    LIB libspdk_bdev_passthru.a
00:07:28.112    LIB libspdk_bdev_iscsi.a
00:07:28.112    SO libspdk_bdev_delay.so.6.0
00:07:28.112    SO libspdk_bdev_gpt.so.6.0
00:07:28.112    SYMLINK libspdk_bdev_error.so
00:07:28.112    LIB libspdk_bdev_aio.a
00:07:28.112    SO libspdk_bdev_ftl.so.6.0
00:07:28.112    SO libspdk_bdev_malloc.so.6.0
00:07:28.112    SO libspdk_bdev_iscsi.so.6.0
00:07:28.112    SO libspdk_bdev_passthru.so.6.0
00:07:28.112    SYMLINK libspdk_bdev_null.so
00:07:28.112    SO libspdk_bdev_aio.so.6.0
00:07:28.112    SYMLINK libspdk_bdev_zone_block.so
00:07:28.112    SYMLINK libspdk_bdev_delay.so
00:07:28.112    SYMLINK libspdk_bdev_ftl.so
00:07:28.112    SYMLINK libspdk_bdev_gpt.so
00:07:28.112    SYMLINK libspdk_bdev_malloc.so
00:07:28.112    SYMLINK libspdk_bdev_iscsi.so
00:07:28.112    SYMLINK libspdk_bdev_passthru.so
00:07:28.370    SYMLINK libspdk_bdev_aio.so
00:07:28.370    LIB libspdk_bdev_lvol.a
00:07:28.370    LIB libspdk_bdev_virtio.a
00:07:28.370    SO libspdk_bdev_virtio.so.6.0
00:07:28.370    SO libspdk_bdev_lvol.so.6.0
00:07:28.370    SYMLINK libspdk_bdev_virtio.so
00:07:28.370    SYMLINK libspdk_bdev_lvol.so
00:07:28.628    LIB libspdk_bdev_raid.a
00:07:28.628    SO libspdk_bdev_raid.so.6.0
00:07:28.628    SYMLINK libspdk_bdev_raid.so
00:07:29.562    LIB libspdk_bdev_nvme.a
00:07:29.820    SO libspdk_bdev_nvme.so.7.1
00:07:29.820    SYMLINK libspdk_bdev_nvme.so
00:07:30.391    CC module/event/subsystems/fsdev/fsdev.o
00:07:30.391    CC module/event/subsystems/iobuf/iobuf.o
00:07:30.391    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:07:30.391    CC module/event/subsystems/scheduler/scheduler.o
00:07:30.391    CC module/event/subsystems/vmd/vmd.o
00:07:30.391    CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:07:30.391    CC module/event/subsystems/vmd/vmd_rpc.o
00:07:30.391    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:07:30.391    CC module/event/subsystems/keyring/keyring.o
00:07:30.391    CC module/event/subsystems/sock/sock.o
00:07:30.650    LIB libspdk_event_vhost_blk.a
00:07:30.650    LIB libspdk_event_fsdev.a
00:07:30.650    LIB libspdk_event_scheduler.a
00:07:30.650    LIB libspdk_event_iobuf.a
00:07:30.650    LIB libspdk_event_keyring.a
00:07:30.650    LIB libspdk_event_vfu_tgt.a
00:07:30.650    LIB libspdk_event_vmd.a
00:07:30.650    LIB libspdk_event_sock.a
00:07:30.650    SO libspdk_event_vhost_blk.so.3.0
00:07:30.650    SO libspdk_event_fsdev.so.1.0
00:07:30.650    SO libspdk_event_scheduler.so.4.0
00:07:30.650    SO libspdk_event_iobuf.so.3.0
00:07:30.650    SO libspdk_event_keyring.so.1.0
00:07:30.650    SO libspdk_event_vmd.so.6.0
00:07:30.650    SO libspdk_event_vfu_tgt.so.3.0
00:07:30.650    SO libspdk_event_sock.so.5.0
00:07:30.650    SYMLINK libspdk_event_vhost_blk.so
00:07:30.650    SYMLINK libspdk_event_fsdev.so
00:07:30.650    SYMLINK libspdk_event_scheduler.so
00:07:30.650    SYMLINK libspdk_event_keyring.so
00:07:30.650    SYMLINK libspdk_event_iobuf.so
00:07:30.650    SYMLINK libspdk_event_vmd.so
00:07:30.650    SYMLINK libspdk_event_vfu_tgt.so
00:07:30.650    SYMLINK libspdk_event_sock.so
00:07:30.908    CC module/event/subsystems/accel/accel.o
00:07:31.167    LIB libspdk_event_accel.a
00:07:31.167    SO libspdk_event_accel.so.6.0
00:07:31.167    SYMLINK libspdk_event_accel.so
00:07:31.426    CC module/event/subsystems/bdev/bdev.o
00:07:31.686    LIB libspdk_event_bdev.a
00:07:31.686    SO libspdk_event_bdev.so.6.0
00:07:31.686    SYMLINK libspdk_event_bdev.so
00:07:32.254    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:07:32.254    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:07:32.254    CC module/event/subsystems/scsi/scsi.o
00:07:32.254    CC module/event/subsystems/nbd/nbd.o
00:07:32.254    CC module/event/subsystems/ublk/ublk.o
00:07:32.254    LIB libspdk_event_nbd.a
00:07:32.254    LIB libspdk_event_ublk.a
00:07:32.254    LIB libspdk_event_scsi.a
00:07:32.254    SO libspdk_event_nbd.so.6.0
00:07:32.254    SO libspdk_event_ublk.so.3.0
00:07:32.254    SO libspdk_event_scsi.so.6.0
00:07:32.254    LIB libspdk_event_nvmf.a
00:07:32.254    SYMLINK libspdk_event_nbd.so
00:07:32.254    SO libspdk_event_nvmf.so.6.0
00:07:32.254    SYMLINK libspdk_event_ublk.so
00:07:32.254    SYMLINK libspdk_event_scsi.so
00:07:32.517    SYMLINK libspdk_event_nvmf.so
00:07:32.775    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:07:32.775    CC module/event/subsystems/iscsi/iscsi.o
00:07:32.775    LIB libspdk_event_vhost_scsi.a
00:07:32.775    LIB libspdk_event_iscsi.a
00:07:32.775    SO libspdk_event_vhost_scsi.so.3.0
00:07:32.775    SO libspdk_event_iscsi.so.6.0
00:07:32.775    SYMLINK libspdk_event_vhost_scsi.so
00:07:32.775    SYMLINK libspdk_event_iscsi.so
00:07:33.033    SO libspdk.so.6.0
00:07:33.033    SYMLINK libspdk.so
00:07:33.292    CC app/trace_record/trace_record.o
00:07:33.292    CXX app/trace/trace.o
00:07:33.569    CC app/spdk_nvme_identify/identify.o
00:07:33.569    CC app/spdk_lspci/spdk_lspci.o
00:07:33.569    CC app/spdk_top/spdk_top.o
00:07:33.569    TEST_HEADER include/spdk/accel.h
00:07:33.569    TEST_HEADER include/spdk/accel_module.h
00:07:33.569    TEST_HEADER include/spdk/assert.h
00:07:33.569    TEST_HEADER include/spdk/barrier.h
00:07:33.569    TEST_HEADER include/spdk/base64.h
00:07:33.569    TEST_HEADER include/spdk/bdev.h
00:07:33.569    CC test/rpc_client/rpc_client_test.o
00:07:33.569    CC app/spdk_nvme_perf/perf.o
00:07:33.569    TEST_HEADER include/spdk/bdev_module.h
00:07:33.569    TEST_HEADER include/spdk/bit_array.h
00:07:33.569    TEST_HEADER include/spdk/bdev_zone.h
00:07:33.569    CC app/spdk_nvme_discover/discovery_aer.o
00:07:33.569    TEST_HEADER include/spdk/bit_pool.h
00:07:33.569    TEST_HEADER include/spdk/blobfs_bdev.h
00:07:33.569    TEST_HEADER include/spdk/blob_bdev.h
00:07:33.569    TEST_HEADER include/spdk/blobfs.h
00:07:33.569    TEST_HEADER include/spdk/blob.h
00:07:33.569    TEST_HEADER include/spdk/config.h
00:07:33.569    TEST_HEADER include/spdk/cpuset.h
00:07:33.569    TEST_HEADER include/spdk/conf.h
00:07:33.569    TEST_HEADER include/spdk/crc16.h
00:07:33.569    TEST_HEADER include/spdk/crc32.h
00:07:33.569    TEST_HEADER include/spdk/dif.h
00:07:33.569    TEST_HEADER include/spdk/crc64.h
00:07:33.569    TEST_HEADER include/spdk/dma.h
00:07:33.569    TEST_HEADER include/spdk/env_dpdk.h
00:07:33.569    TEST_HEADER include/spdk/event.h
00:07:33.569    TEST_HEADER include/spdk/endian.h
00:07:33.569    TEST_HEADER include/spdk/env.h
00:07:33.569    TEST_HEADER include/spdk/fsdev.h
00:07:33.569    TEST_HEADER include/spdk/fd_group.h
00:07:33.569    TEST_HEADER include/spdk/file.h
00:07:33.569    TEST_HEADER include/spdk/fd.h
00:07:33.569    TEST_HEADER include/spdk/ftl.h
00:07:33.569    TEST_HEADER include/spdk/fsdev_module.h
00:07:33.569    TEST_HEADER include/spdk/hexlify.h
00:07:33.569    TEST_HEADER include/spdk/gpt_spec.h
00:07:33.569    TEST_HEADER include/spdk/idxd_spec.h
00:07:33.569    TEST_HEADER include/spdk/histogram_data.h
00:07:33.569    TEST_HEADER include/spdk/idxd.h
00:07:33.569    TEST_HEADER include/spdk/init.h
00:07:33.569    TEST_HEADER include/spdk/ioat.h
00:07:33.569    TEST_HEADER include/spdk/ioat_spec.h
00:07:33.569    CC examples/interrupt_tgt/interrupt_tgt.o
00:07:33.569    TEST_HEADER include/spdk/json.h
00:07:33.569    TEST_HEADER include/spdk/iscsi_spec.h
00:07:33.569    TEST_HEADER include/spdk/keyring_module.h
00:07:33.569    TEST_HEADER include/spdk/likely.h
00:07:33.569    TEST_HEADER include/spdk/keyring.h
00:07:33.569    TEST_HEADER include/spdk/jsonrpc.h
00:07:33.569    TEST_HEADER include/spdk/log.h
00:07:33.569    TEST_HEADER include/spdk/lvol.h
00:07:33.569    TEST_HEADER include/spdk/md5.h
00:07:33.569    TEST_HEADER include/spdk/nbd.h
00:07:33.569    TEST_HEADER include/spdk/mmio.h
00:07:33.569    TEST_HEADER include/spdk/memory.h
00:07:33.569    CC app/spdk_dd/spdk_dd.o
00:07:33.570    TEST_HEADER include/spdk/net.h
00:07:33.570    TEST_HEADER include/spdk/nvme.h
00:07:33.570    TEST_HEADER include/spdk/notify.h
00:07:33.570    TEST_HEADER include/spdk/nvme_intel.h
00:07:33.570    TEST_HEADER include/spdk/nvme_ocssd.h
00:07:33.570    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:07:33.570    TEST_HEADER include/spdk/nvme_spec.h
00:07:33.570    TEST_HEADER include/spdk/nvme_zns.h
00:07:33.570    TEST_HEADER include/spdk/nvmf_cmd.h
00:07:33.570    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:07:33.570    TEST_HEADER include/spdk/nvmf.h
00:07:33.570    TEST_HEADER include/spdk/nvmf_spec.h
00:07:33.570    TEST_HEADER include/spdk/opal_spec.h
00:07:33.570    TEST_HEADER include/spdk/nvmf_transport.h
00:07:33.570    TEST_HEADER include/spdk/pipe.h
00:07:33.570    TEST_HEADER include/spdk/opal.h
00:07:33.570    TEST_HEADER include/spdk/pci_ids.h
00:07:33.570    TEST_HEADER include/spdk/queue.h
00:07:33.570    TEST_HEADER include/spdk/reduce.h
00:07:33.570    TEST_HEADER include/spdk/scsi.h
00:07:33.570    TEST_HEADER include/spdk/scheduler.h
00:07:33.570    TEST_HEADER include/spdk/rpc.h
00:07:33.570    TEST_HEADER include/spdk/sock.h
00:07:33.570    CC app/iscsi_tgt/iscsi_tgt.o
00:07:33.570    TEST_HEADER include/spdk/scsi_spec.h
00:07:33.570    CC app/nvmf_tgt/nvmf_main.o
00:07:33.570    TEST_HEADER include/spdk/stdinc.h
00:07:33.570    TEST_HEADER include/spdk/string.h
00:07:33.570    TEST_HEADER include/spdk/trace_parser.h
00:07:33.570    TEST_HEADER include/spdk/thread.h
00:07:33.570    TEST_HEADER include/spdk/tree.h
00:07:33.570    TEST_HEADER include/spdk/trace.h
00:07:33.570    TEST_HEADER include/spdk/ublk.h
00:07:33.570    TEST_HEADER include/spdk/util.h
00:07:33.570    TEST_HEADER include/spdk/uuid.h
00:07:33.570    TEST_HEADER include/spdk/version.h
00:07:33.570    TEST_HEADER include/spdk/vfio_user_pci.h
00:07:33.570    TEST_HEADER include/spdk/vfio_user_spec.h
00:07:33.570    TEST_HEADER include/spdk/vhost.h
00:07:33.570    TEST_HEADER include/spdk/vmd.h
00:07:33.570    TEST_HEADER include/spdk/xor.h
00:07:33.570    TEST_HEADER include/spdk/zipf.h
00:07:33.570    CXX test/cpp_headers/accel.o
00:07:33.570    CXX test/cpp_headers/accel_module.o
00:07:33.570    CXX test/cpp_headers/barrier.o
00:07:33.570    CXX test/cpp_headers/assert.o
00:07:33.570    CXX test/cpp_headers/base64.o
00:07:33.570    CXX test/cpp_headers/bdev.o
00:07:33.570    CXX test/cpp_headers/bdev_module.o
00:07:33.570    CC app/spdk_tgt/spdk_tgt.o
00:07:33.570    CXX test/cpp_headers/bdev_zone.o
00:07:33.570    CXX test/cpp_headers/blob_bdev.o
00:07:33.570    CXX test/cpp_headers/bit_array.o
00:07:33.570    CXX test/cpp_headers/bit_pool.o
00:07:33.570    CXX test/cpp_headers/blobfs_bdev.o
00:07:33.570    CXX test/cpp_headers/blobfs.o
00:07:33.570    CXX test/cpp_headers/config.o
00:07:33.570    CXX test/cpp_headers/conf.o
00:07:33.570    CXX test/cpp_headers/crc16.o
00:07:33.570    CXX test/cpp_headers/blob.o
00:07:33.570    CXX test/cpp_headers/cpuset.o
00:07:33.570    CXX test/cpp_headers/crc32.o
00:07:33.570    CXX test/cpp_headers/crc64.o
00:07:33.570    CXX test/cpp_headers/dma.o
00:07:33.570    CXX test/cpp_headers/dif.o
00:07:33.570    CXX test/cpp_headers/endian.o
00:07:33.570    CXX test/cpp_headers/env_dpdk.o
00:07:33.570    CXX test/cpp_headers/env.o
00:07:33.570    CXX test/cpp_headers/event.o
00:07:33.570    CXX test/cpp_headers/fd.o
00:07:33.570    CXX test/cpp_headers/fd_group.o
00:07:33.570    CXX test/cpp_headers/file.o
00:07:33.570    CXX test/cpp_headers/fsdev_module.o
00:07:33.570    CXX test/cpp_headers/ftl.o
00:07:33.570    CXX test/cpp_headers/fsdev.o
00:07:33.570    CXX test/cpp_headers/gpt_spec.o
00:07:33.570    CXX test/cpp_headers/histogram_data.o
00:07:33.570    CXX test/cpp_headers/hexlify.o
00:07:33.570    CXX test/cpp_headers/idxd.o
00:07:33.570    CXX test/cpp_headers/init.o
00:07:33.570    CXX test/cpp_headers/idxd_spec.o
00:07:33.570    CXX test/cpp_headers/ioat.o
00:07:33.570    CXX test/cpp_headers/ioat_spec.o
00:07:33.570    CXX test/cpp_headers/json.o
00:07:33.570    CXX test/cpp_headers/iscsi_spec.o
00:07:33.570    CXX test/cpp_headers/jsonrpc.o
00:07:33.570    CXX test/cpp_headers/keyring.o
00:07:33.570    CXX test/cpp_headers/keyring_module.o
00:07:33.570    CXX test/cpp_headers/log.o
00:07:33.570    CXX test/cpp_headers/likely.o
00:07:33.570    CXX test/cpp_headers/md5.o
00:07:33.570    CXX test/cpp_headers/lvol.o
00:07:33.570    CXX test/cpp_headers/memory.o
00:07:33.570    CC examples/ioat/perf/perf.o
00:07:33.570    CXX test/cpp_headers/mmio.o
00:07:33.570    CXX test/cpp_headers/nbd.o
00:07:33.570    CXX test/cpp_headers/notify.o
00:07:33.570    CXX test/cpp_headers/net.o
00:07:33.570    CXX test/cpp_headers/nvme.o
00:07:33.570    CXX test/cpp_headers/nvme_intel.o
00:07:33.570    CXX test/cpp_headers/nvme_ocssd.o
00:07:33.570    CXX test/cpp_headers/nvme_ocssd_spec.o
00:07:33.570    CXX test/cpp_headers/nvme_zns.o
00:07:33.570    CXX test/cpp_headers/nvme_spec.o
00:07:33.570    CXX test/cpp_headers/nvmf_cmd.o
00:07:33.570    CXX test/cpp_headers/nvmf_fc_spec.o
00:07:33.570    CXX test/cpp_headers/nvmf.o
00:07:33.570    CXX test/cpp_headers/nvmf_spec.o
00:07:33.570    CXX test/cpp_headers/nvmf_transport.o
00:07:33.570    CXX test/cpp_headers/opal.o
00:07:33.570    CC examples/ioat/verify/verify.o
00:07:33.570    CXX test/cpp_headers/opal_spec.o
00:07:33.570    CC examples/util/zipf/zipf.o
00:07:33.570    CC test/thread/poller_perf/poller_perf.o
00:07:33.570    CC test/app/stub/stub.o
00:07:33.570    CC test/env/memory/memory_ut.o
00:07:33.570    CXX test/cpp_headers/pci_ids.o
00:07:33.570    CC app/fio/nvme/fio_plugin.o
00:07:33.570    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:07:33.570    CC test/env/vtophys/vtophys.o
00:07:33.570    CC test/app/jsoncat/jsoncat.o
00:07:33.840    CC test/env/pci/pci_ut.o
00:07:33.840    CC test/dma/test_dma/test_dma.o
00:07:33.840    CC test/app/histogram_perf/histogram_perf.o
00:07:33.840    LINK spdk_lspci
00:07:33.840    CC app/fio/bdev/fio_plugin.o
00:07:33.840    CC test/app/bdev_svc/bdev_svc.o
00:07:33.840    LINK interrupt_tgt
00:07:33.840    LINK spdk_nvme_discover
00:07:34.102    CC test/env/mem_callbacks/mem_callbacks.o
00:07:34.103    LINK nvmf_tgt
00:07:34.103    LINK rpc_client_test
00:07:34.103    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:07:34.103    LINK iscsi_tgt
00:07:34.103    LINK zipf
00:07:34.103    LINK poller_perf
00:07:34.103    CXX test/cpp_headers/pipe.o
00:07:34.103    CXX test/cpp_headers/queue.o
00:07:34.103    LINK spdk_trace_record
00:07:34.103    CXX test/cpp_headers/reduce.o
00:07:34.103    CXX test/cpp_headers/rpc.o
00:07:34.103    CXX test/cpp_headers/scheduler.o
00:07:34.103    CXX test/cpp_headers/scsi.o
00:07:34.103    CXX test/cpp_headers/scsi_spec.o
00:07:34.103    CXX test/cpp_headers/sock.o
00:07:34.103    CXX test/cpp_headers/stdinc.o
00:07:34.103    CXX test/cpp_headers/string.o
00:07:34.103    CXX test/cpp_headers/thread.o
00:07:34.103    CXX test/cpp_headers/trace.o
00:07:34.103    CXX test/cpp_headers/tree.o
00:07:34.103    CXX test/cpp_headers/ublk.o
00:07:34.103    CXX test/cpp_headers/trace_parser.o
00:07:34.103    CXX test/cpp_headers/uuid.o
00:07:34.103    CXX test/cpp_headers/util.o
00:07:34.103    LINK env_dpdk_post_init
00:07:34.103    CXX test/cpp_headers/version.o
00:07:34.361    CXX test/cpp_headers/vfio_user_pci.o
00:07:34.361    CXX test/cpp_headers/vfio_user_spec.o
00:07:34.361    CXX test/cpp_headers/vhost.o
00:07:34.361    CXX test/cpp_headers/vmd.o
00:07:34.361    LINK jsoncat
00:07:34.361    CXX test/cpp_headers/xor.o
00:07:34.361    CXX test/cpp_headers/zipf.o
00:07:34.361    LINK histogram_perf
00:07:34.361    LINK vtophys
00:07:34.361    LINK spdk_tgt
00:07:34.361    LINK ioat_perf
00:07:34.361    LINK stub
00:07:34.361    LINK verify
00:07:34.361    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:07:34.361    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:07:34.361    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:07:34.361    LINK bdev_svc
00:07:34.361    LINK spdk_dd
00:07:34.618    LINK spdk_trace
00:07:34.618    LINK pci_ut
00:07:34.618    CC test/event/reactor/reactor.o
00:07:34.618    CC examples/idxd/perf/perf.o
00:07:34.618    CC test/event/event_perf/event_perf.o
00:07:34.618    LINK spdk_nvme
00:07:34.618    CC test/event/reactor_perf/reactor_perf.o
00:07:34.618    CC examples/sock/hello_world/hello_sock.o
00:07:34.618    CC examples/vmd/lsvmd/lsvmd.o
00:07:34.618    CC test/event/app_repeat/app_repeat.o
00:07:34.618    LINK test_dma
00:07:34.618    CC examples/vmd/led/led.o
00:07:34.618    CC examples/thread/thread/thread_ex.o
00:07:34.618    CC test/event/scheduler/scheduler.o
00:07:34.618    LINK nvme_fuzz
00:07:34.618    LINK spdk_bdev
00:07:34.876    LINK event_perf
00:07:34.876    LINK reactor
00:07:34.876    LINK reactor_perf
00:07:34.876    LINK vhost_fuzz
00:07:34.876    LINK spdk_top
00:07:34.876    LINK lsvmd
00:07:34.876    LINK spdk_nvme_perf
00:07:34.876    LINK app_repeat
00:07:34.876    LINK led
00:07:34.876    LINK mem_callbacks
00:07:34.876    LINK spdk_nvme_identify
00:07:34.876    LINK hello_sock
00:07:34.876    CC app/vhost/vhost.o
00:07:34.876    LINK idxd_perf
00:07:34.876    LINK scheduler
00:07:34.876    LINK thread
00:07:35.135    LINK vhost
00:07:35.135    CC test/nvme/reset/reset.o
00:07:35.135    CC test/nvme/overhead/overhead.o
00:07:35.135    CC test/nvme/boot_partition/boot_partition.o
00:07:35.135    CC test/nvme/aer/aer.o
00:07:35.135    CC test/nvme/startup/startup.o
00:07:35.135    CC test/nvme/compliance/nvme_compliance.o
00:07:35.135    CC test/nvme/simple_copy/simple_copy.o
00:07:35.135    CC test/nvme/e2edp/nvme_dp.o
00:07:35.135    CC test/nvme/reserve/reserve.o
00:07:35.135    CC test/nvme/doorbell_aers/doorbell_aers.o
00:07:35.135    CC test/nvme/err_injection/err_injection.o
00:07:35.135    CC test/nvme/cuse/cuse.o
00:07:35.135    CC test/nvme/fdp/fdp.o
00:07:35.135    CC test/nvme/sgl/sgl.o
00:07:35.135    CC test/nvme/fused_ordering/fused_ordering.o
00:07:35.135    CC test/nvme/connect_stress/connect_stress.o
00:07:35.135    CC test/accel/dif/dif.o
00:07:35.135    CC test/blobfs/mkfs/mkfs.o
00:07:35.135    LINK memory_ut
00:07:35.392    CC examples/nvme/cmb_copy/cmb_copy.o
00:07:35.392    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:07:35.392    CC test/lvol/esnap/esnap.o
00:07:35.392    CC examples/nvme/hello_world/hello_world.o
00:07:35.393    CC examples/nvme/arbitration/arbitration.o
00:07:35.393    CC examples/nvme/abort/abort.o
00:07:35.393    CC examples/nvme/reconnect/reconnect.o
00:07:35.393    CC examples/nvme/hotplug/hotplug.o
00:07:35.393    CC examples/nvme/nvme_manage/nvme_manage.o
00:07:35.393    LINK boot_partition
00:07:35.393    LINK startup
00:07:35.393    LINK err_injection
00:07:35.393    LINK doorbell_aers
00:07:35.393    LINK connect_stress
00:07:35.393    LINK simple_copy
00:07:35.393    LINK fused_ordering
00:07:35.393    LINK reserve
00:07:35.393    CC examples/accel/perf/accel_perf.o
00:07:35.393    LINK reset
00:07:35.393    LINK sgl
00:07:35.393    CC examples/fsdev/hello_world/hello_fsdev.o
00:07:35.393    LINK aer
00:07:35.393    LINK mkfs
00:07:35.393    LINK nvme_dp
00:07:35.393    CC examples/blob/cli/blobcli.o
00:07:35.650    LINK overhead
00:07:35.650    CC examples/blob/hello_world/hello_blob.o
00:07:35.650    LINK cmb_copy
00:07:35.650    LINK fdp
00:07:35.650    LINK nvme_compliance
00:07:35.650    LINK pmr_persistence
00:07:35.650    LINK hello_world
00:07:35.650    LINK hotplug
00:07:35.650    LINK reconnect
00:07:35.650    LINK arbitration
00:07:35.650    LINK abort
00:07:35.650    LINK hello_fsdev
00:07:35.908    LINK hello_blob
00:07:35.908    LINK nvme_manage
00:07:35.908    LINK dif
00:07:35.908    LINK accel_perf
00:07:35.908    LINK iscsi_fuzz
00:07:35.908    LINK blobcli
00:07:36.166    LINK cuse
00:07:36.424    CC examples/bdev/hello_world/hello_bdev.o
00:07:36.424    CC examples/bdev/bdevperf/bdevperf.o
00:07:36.424    CC test/bdev/bdevio/bdevio.o
00:07:36.682    LINK hello_bdev
00:07:36.682    LINK bdevio
00:07:36.941    LINK bdevperf
00:07:37.508    CC examples/nvmf/nvmf/nvmf.o
00:07:37.766    LINK nvmf
00:07:39.244    LINK esnap
00:07:39.244  
00:07:39.244  real	0m56.496s
00:07:39.244  user	8m26.032s
00:07:39.244  sys	3m48.925s
00:07:39.244   23:48:54 make -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:07:39.244   23:48:54 make -- common/autotest_common.sh@10 -- $ set +x
00:07:39.244  ************************************
00:07:39.244  END TEST make
00:07:39.244  ************************************
00:07:39.244   23:48:54  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:07:39.244   23:48:54  -- pm/common@29 -- $ signal_monitor_resources TERM
00:07:39.244   23:48:54  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:07:39.244   23:48:54  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.244   23:48:54  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:07:39.244   23:48:54  -- pm/common@44 -- $ pid=2817417
00:07:39.244   23:48:54  -- pm/common@50 -- $ kill -TERM 2817417
00:07:39.244   23:48:54  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.244   23:48:54  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:07:39.244   23:48:54  -- pm/common@44 -- $ pid=2817418
00:07:39.244   23:48:54  -- pm/common@50 -- $ kill -TERM 2817418
00:07:39.244   23:48:54  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.244   23:48:54  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:07:39.244   23:48:54  -- pm/common@44 -- $ pid=2817420
00:07:39.244   23:48:54  -- pm/common@50 -- $ kill -TERM 2817420
00:07:39.244   23:48:54  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.244   23:48:54  -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:07:39.244   23:48:54  -- pm/common@44 -- $ pid=2817447
00:07:39.244   23:48:54  -- pm/common@50 -- $ sudo -E kill -TERM 2817447
00:07:39.244   23:48:55  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:07:39.244   23:48:55  -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:07:39.503    23:48:55  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:39.503     23:48:55  -- common/autotest_common.sh@1711 -- # lcov --version
00:07:39.503     23:48:55  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:39.503    23:48:55  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:39.503    23:48:55  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:39.503    23:48:55  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:39.503    23:48:55  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:39.503    23:48:55  -- scripts/common.sh@336 -- # IFS=.-:
00:07:39.503    23:48:55  -- scripts/common.sh@336 -- # read -ra ver1
00:07:39.503    23:48:55  -- scripts/common.sh@337 -- # IFS=.-:
00:07:39.503    23:48:55  -- scripts/common.sh@337 -- # read -ra ver2
00:07:39.503    23:48:55  -- scripts/common.sh@338 -- # local 'op=<'
00:07:39.503    23:48:55  -- scripts/common.sh@340 -- # ver1_l=2
00:07:39.503    23:48:55  -- scripts/common.sh@341 -- # ver2_l=1
00:07:39.503    23:48:55  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:39.503    23:48:55  -- scripts/common.sh@344 -- # case "$op" in
00:07:39.503    23:48:55  -- scripts/common.sh@345 -- # : 1
00:07:39.503    23:48:55  -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:39.503    23:48:55  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:39.503     23:48:55  -- scripts/common.sh@365 -- # decimal 1
00:07:39.503     23:48:55  -- scripts/common.sh@353 -- # local d=1
00:07:39.503     23:48:55  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:39.503     23:48:55  -- scripts/common.sh@355 -- # echo 1
00:07:39.503    23:48:55  -- scripts/common.sh@365 -- # ver1[v]=1
00:07:39.503     23:48:55  -- scripts/common.sh@366 -- # decimal 2
00:07:39.503     23:48:55  -- scripts/common.sh@353 -- # local d=2
00:07:39.503     23:48:55  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:39.503     23:48:55  -- scripts/common.sh@355 -- # echo 2
00:07:39.503    23:48:55  -- scripts/common.sh@366 -- # ver2[v]=2
00:07:39.503    23:48:55  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:39.503    23:48:55  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:39.503    23:48:55  -- scripts/common.sh@368 -- # return 0
00:07:39.503    23:48:55  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:39.503    23:48:55  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:39.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.503  		--rc genhtml_branch_coverage=1
00:07:39.503  		--rc genhtml_function_coverage=1
00:07:39.503  		--rc genhtml_legend=1
00:07:39.503  		--rc geninfo_all_blocks=1
00:07:39.503  		--rc geninfo_unexecuted_blocks=1
00:07:39.503  		
00:07:39.503  		'
00:07:39.503    23:48:55  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:39.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.503  		--rc genhtml_branch_coverage=1
00:07:39.503  		--rc genhtml_function_coverage=1
00:07:39.503  		--rc genhtml_legend=1
00:07:39.503  		--rc geninfo_all_blocks=1
00:07:39.503  		--rc geninfo_unexecuted_blocks=1
00:07:39.503  		
00:07:39.503  		'
00:07:39.503    23:48:55  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:39.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.503  		--rc genhtml_branch_coverage=1
00:07:39.503  		--rc genhtml_function_coverage=1
00:07:39.503  		--rc genhtml_legend=1
00:07:39.503  		--rc geninfo_all_blocks=1
00:07:39.503  		--rc geninfo_unexecuted_blocks=1
00:07:39.503  		
00:07:39.503  		'
00:07:39.503    23:48:55  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:39.503  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:39.503  		--rc genhtml_branch_coverage=1
00:07:39.503  		--rc genhtml_function_coverage=1
00:07:39.503  		--rc genhtml_legend=1
00:07:39.503  		--rc geninfo_all_blocks=1
00:07:39.503  		--rc geninfo_unexecuted_blocks=1
00:07:39.503  		
00:07:39.503  		'
00:07:39.503   23:48:55  -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:39.503     23:48:55  -- nvmf/common.sh@7 -- # uname -s
00:07:39.503    23:48:55  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:39.503    23:48:55  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:39.503    23:48:55  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:39.503    23:48:55  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:39.503    23:48:55  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:39.503    23:48:55  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:39.503    23:48:55  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:39.503    23:48:55  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:39.503    23:48:55  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:39.503     23:48:55  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:39.503    23:48:55  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:07:39.503    23:48:55  -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:07:39.503    23:48:55  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:39.503    23:48:55  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:39.503    23:48:55  -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:39.503    23:48:55  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:39.503    23:48:55  -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:39.503     23:48:55  -- scripts/common.sh@15 -- # shopt -s extglob
00:07:39.503     23:48:55  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:39.503     23:48:55  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:39.503     23:48:55  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:39.503      23:48:55  -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:39.503      23:48:55  -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:39.503      23:48:55  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:39.503      23:48:55  -- paths/export.sh@5 -- # export PATH
00:07:39.503      23:48:55  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:39.503    23:48:55  -- nvmf/common.sh@51 -- # : 0
00:07:39.503    23:48:55  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:07:39.503    23:48:55  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:07:39.503    23:48:55  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:39.503    23:48:55  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:39.503    23:48:55  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:39.503    23:48:55  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:07:39.503  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:07:39.503    23:48:55  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:07:39.503    23:48:55  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:07:39.503    23:48:55  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:07:39.503   23:48:55  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:07:39.503    23:48:55  -- spdk/autotest.sh@32 -- # uname -s
00:07:39.503   23:48:55  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:07:39.503   23:48:55  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:07:39.503   23:48:55  -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:07:39.504   23:48:55  -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:07:39.504   23:48:55  -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:07:39.504   23:48:55  -- spdk/autotest.sh@44 -- # modprobe nbd
00:07:39.504    23:48:55  -- spdk/autotest.sh@46 -- # type -P udevadm
00:07:39.504   23:48:55  -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:07:39.504   23:48:55  -- spdk/autotest.sh@48 -- # udevadm_pid=2880244
00:07:39.504   23:48:55  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:07:39.504   23:48:55  -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:07:39.504   23:48:55  -- pm/common@17 -- # local monitor
00:07:39.504   23:48:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.504   23:48:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.504   23:48:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.504    23:48:55  -- pm/common@21 -- # date +%s
00:07:39.504   23:48:55  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:07:39.504    23:48:55  -- pm/common@21 -- # date +%s
00:07:39.504   23:48:55  -- pm/common@25 -- # sleep 1
00:07:39.504    23:48:55  -- pm/common@21 -- # date +%s
00:07:39.504    23:48:55  -- pm/common@21 -- # date +%s
00:07:39.504   23:48:55  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784535
00:07:39.504   23:48:55  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784535
00:07:39.504   23:48:55  -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784535
00:07:39.504   23:48:55  -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733784535
00:07:39.504  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784535_collect-vmstat.pm.log
00:07:39.504  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784535_collect-cpu-load.pm.log
00:07:39.504  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784535_collect-cpu-temp.pm.log
00:07:39.504  Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733784535_collect-bmc-pm.bmc.pm.log
00:07:40.440   23:48:56  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:07:40.440   23:48:56  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:07:40.440   23:48:56  -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:40.440   23:48:56  -- common/autotest_common.sh@10 -- # set +x
00:07:40.440   23:48:56  -- spdk/autotest.sh@59 -- # create_test_list
00:07:40.440   23:48:56  -- common/autotest_common.sh@752 -- # xtrace_disable
00:07:40.440   23:48:56  -- common/autotest_common.sh@10 -- # set +x
00:07:40.698     23:48:56  -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:07:40.698    23:48:56  -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:40.698   23:48:56  -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:40.698   23:48:56  -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:07:40.698   23:48:56  -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:07:40.698   23:48:56  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:07:40.698    23:48:56  -- common/autotest_common.sh@1457 -- # uname
00:07:40.698   23:48:56  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:07:40.699   23:48:56  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:07:40.699    23:48:56  -- common/autotest_common.sh@1477 -- # uname
00:07:40.699   23:48:56  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:07:40.699   23:48:56  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:07:40.699   23:48:56  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:07:40.699  lcov: LCOV version 1.15
00:07:40.699   23:48:56  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:07:52.902  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:07:52.902  geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:08:05.105   23:49:20  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:08:05.105   23:49:20  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:05.105   23:49:20  -- common/autotest_common.sh@10 -- # set +x
00:08:05.105   23:49:20  -- spdk/autotest.sh@78 -- # rm -f
00:08:05.105   23:49:20  -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:08:08.394  0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:08:08.394  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:08:08.394  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:08:08.394   23:49:24  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:08:08.394   23:49:24  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:08:08.394   23:49:24  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:08:08.394   23:49:24  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:08:08.394   23:49:24  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:08:08.394   23:49:24  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:08:08.394   23:49:24  -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme*
00:08:08.394   23:49:24  -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0
00:08:08.394   23:49:24  -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n*
00:08:08.394   23:49:24  -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:08:08.394   23:49:24  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:08:08.394   23:49:24  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:08:08.394   23:49:24  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:08:08.394   23:49:24  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:08:08.394   23:49:24  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:08:08.394   23:49:24  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:08:08.394   23:49:24  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:08:08.394   23:49:24  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:08:08.394   23:49:24  -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:08:08.394  No valid GPT data, bailing
00:08:08.394    23:49:24  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:08:08.394   23:49:24  -- scripts/common.sh@394 -- # pt=
00:08:08.394   23:49:24  -- scripts/common.sh@395 -- # return 1
00:08:08.395   23:49:24  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:08:08.395  1+0 records in
00:08:08.395  1+0 records out
00:08:08.395  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00532901 s, 197 MB/s
00:08:08.395   23:49:24  -- spdk/autotest.sh@105 -- # sync
00:08:08.395   23:49:24  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:08:08.395   23:49:24  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:08:08.395    23:49:24  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:08:13.665    23:49:29  -- spdk/autotest.sh@111 -- # uname -s
00:08:13.665   23:49:29  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:08:13.665   23:49:29  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:08:13.665   23:49:29  -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:08:16.951  Hugepages
00:08:16.951  node     hugesize     free /  total
00:08:16.951  node0   1048576kB        0 /      0
00:08:16.951  node0      2048kB        0 /      0
00:08:16.951  node1   1048576kB        0 /      0
00:08:16.951  node1      2048kB        0 /      0
00:08:16.951  
00:08:16.951  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:08:16.951  I/OAT                     0000:00:04.0    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.1    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.2    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.3    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.4    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.5    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.6    8086   2021   0       ioatdma          -          -
00:08:16.951  I/OAT                     0000:00:04.7    8086   2021   0       ioatdma          -          -
00:08:16.951  NVMe                      0000:5e:00.0    8086   0a54   0       nvme             nvme0      nvme0n1
00:08:16.951  I/OAT                     0000:80:04.0    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.1    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.2    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.3    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.4    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.5    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.6    8086   2021   1       ioatdma          -          -
00:08:16.951  I/OAT                     0000:80:04.7    8086   2021   1       ioatdma          -          -
00:08:16.951    23:49:32  -- spdk/autotest.sh@117 -- # uname -s
00:08:16.951   23:49:32  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:08:16.951   23:49:32  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:08:16.951   23:49:32  -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:08:19.486  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:19.486  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:19.745  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:19.745  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:19.745  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:20.683  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:08:20.683   23:49:36  -- common/autotest_common.sh@1517 -- # sleep 1
00:08:21.621   23:49:37  -- common/autotest_common.sh@1518 -- # bdfs=()
00:08:21.621   23:49:37  -- common/autotest_common.sh@1518 -- # local bdfs
00:08:21.621   23:49:37  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:08:21.621    23:49:37  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:08:21.621    23:49:37  -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:21.621    23:49:37  -- common/autotest_common.sh@1498 -- # local bdfs
00:08:21.621    23:49:37  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:21.621     23:49:37  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:21.621     23:49:37  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:21.621    23:49:37  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:21.621    23:49:37  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:08:21.621   23:49:37  -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:08:24.912  Waiting for block devices as requested
00:08:24.912  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:08:24.912  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:24.912  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:25.171  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:25.171  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:08:25.171  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:08:25.430  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:08:25.430  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:08:25.430  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:08:25.430  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:08:25.689  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:08:25.689  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:08:25.689   23:49:41  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:08:25.689    23:49:41  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0
00:08:25.689     23:49:41  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:08:25.689     23:49:41  -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme
00:08:25.689    23:49:41  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:08:25.689    23:49:41  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]]
00:08:25.689     23:49:41  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0
00:08:25.689    23:49:41  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:08:25.689   23:49:41  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:08:25.689   23:49:41  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:08:25.689    23:49:41  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:08:25.689    23:49:41  -- common/autotest_common.sh@1531 -- # grep oacs
00:08:25.689    23:49:41  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:08:25.948   23:49:41  -- common/autotest_common.sh@1531 -- # oacs=' 0xf'
00:08:25.948   23:49:41  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:08:25.948   23:49:41  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:08:25.948    23:49:41  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:08:25.948    23:49:41  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:08:25.948    23:49:41  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:08:25.948   23:49:41  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:08:25.948   23:49:41  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:08:25.948   23:49:41  -- common/autotest_common.sh@1543 -- # continue
00:08:25.948   23:49:41  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:08:25.948   23:49:41  -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:25.948   23:49:41  -- common/autotest_common.sh@10 -- # set +x
00:08:25.948   23:49:41  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:08:25.948   23:49:41  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:25.948   23:49:41  -- common/autotest_common.sh@10 -- # set +x
00:08:25.948   23:49:41  -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:08:29.239  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:08:29.239  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:08:29.496  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:08:29.755   23:49:45  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:08:29.755   23:49:45  -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:29.755   23:49:45  -- common/autotest_common.sh@10 -- # set +x
00:08:29.755   23:49:45  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:08:29.755   23:49:45  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:08:29.755    23:49:45  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:08:29.755    23:49:45  -- common/autotest_common.sh@1563 -- # bdfs=()
00:08:29.755    23:49:45  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:08:29.755    23:49:45  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:08:29.755    23:49:45  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:08:29.755     23:49:45  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:08:29.755     23:49:45  -- common/autotest_common.sh@1498 -- # bdfs=()
00:08:29.755     23:49:45  -- common/autotest_common.sh@1498 -- # local bdfs
00:08:29.755     23:49:45  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:08:29.755      23:49:45  -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:08:29.755      23:49:45  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:08:29.755     23:49:45  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:08:29.755     23:49:45  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:08:29.755    23:49:45  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:08:29.756     23:49:45  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device
00:08:29.756    23:49:45  -- common/autotest_common.sh@1566 -- # device=0x0a54
00:08:29.756    23:49:45  -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]]
00:08:29.756    23:49:45  -- common/autotest_common.sh@1568 -- # bdfs+=($bdf)
00:08:29.756    23:49:45  -- common/autotest_common.sh@1572 -- # (( 1 > 0 ))
00:08:29.756    23:49:45  -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0
00:08:29.756   23:49:45  -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]]
00:08:29.756   23:49:45  -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2894155
00:08:29.756   23:49:45  -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:29.756   23:49:45  -- common/autotest_common.sh@1585 -- # waitforlisten 2894155
00:08:29.756   23:49:45  -- common/autotest_common.sh@835 -- # '[' -z 2894155 ']'
00:08:29.756   23:49:45  -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:29.756   23:49:45  -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:29.756   23:49:45  -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:29.756  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:29.756   23:49:45  -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:29.756   23:49:45  -- common/autotest_common.sh@10 -- # set +x
00:08:29.756  [2024-12-09 23:49:45.612848] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:29.756  [2024-12-09 23:49:45.612896] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894155 ]
00:08:30.014  [2024-12-09 23:49:45.687316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.014  [2024-12-09 23:49:45.728972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.272   23:49:45  -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:30.272   23:49:45  -- common/autotest_common.sh@868 -- # return 0
00:08:30.272   23:49:45  -- common/autotest_common.sh@1587 -- # bdf_id=0
00:08:30.272   23:49:45  -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}"
00:08:30.272   23:49:45  -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
00:08:33.559  nvme0n1
00:08:33.559   23:49:48  -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
00:08:33.559  [2024-12-09 23:49:49.132514] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18
00:08:33.559  [2024-12-09 23:49:49.132543] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18
00:08:33.559  request:
00:08:33.559  {
00:08:33.559    "nvme_ctrlr_name": "nvme0",
00:08:33.559    "password": "test",
00:08:33.559    "method": "bdev_nvme_opal_revert",
00:08:33.559    "req_id": 1
00:08:33.559  }
00:08:33.559  Got JSON-RPC error response
00:08:33.559  response:
00:08:33.559  {
00:08:33.559    "code": -32603,
00:08:33.559    "message": "Internal error"
00:08:33.559  }
00:08:33.559   23:49:49  -- common/autotest_common.sh@1591 -- # true
00:08:33.559   23:49:49  -- common/autotest_common.sh@1592 -- # (( ++bdf_id ))
00:08:33.559   23:49:49  -- common/autotest_common.sh@1595 -- # killprocess 2894155
00:08:33.559   23:49:49  -- common/autotest_common.sh@954 -- # '[' -z 2894155 ']'
00:08:33.559   23:49:49  -- common/autotest_common.sh@958 -- # kill -0 2894155
00:08:33.559    23:49:49  -- common/autotest_common.sh@959 -- # uname
00:08:33.559   23:49:49  -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:33.559    23:49:49  -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2894155
00:08:33.559   23:49:49  -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:33.559   23:49:49  -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:33.559   23:49:49  -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2894155'
00:08:33.559  killing process with pid 2894155
00:08:33.559   23:49:49  -- common/autotest_common.sh@973 -- # kill 2894155
00:08:33.559   23:49:49  -- common/autotest_common.sh@978 -- # wait 2894155
00:08:34.936   23:49:50  -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']'
00:08:34.936   23:49:50  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:08:34.936   23:49:50  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:34.936   23:49:50  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:34.936   23:49:50  -- spdk/autotest.sh@149 -- # timing_enter lib
00:08:34.936   23:49:50  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:34.936   23:49:50  -- common/autotest_common.sh@10 -- # set +x
00:08:34.936   23:49:50  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:08:34.936   23:49:50  -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:08:34.936   23:49:50  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:34.936   23:49:50  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:34.936   23:49:50  -- common/autotest_common.sh@10 -- # set +x
00:08:35.195  ************************************
00:08:35.195  START TEST env
00:08:35.195  ************************************
00:08:35.196   23:49:50 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:08:35.196  * Looking for test storage...
00:08:35.196  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:35.196     23:49:50 env -- common/autotest_common.sh@1711 -- # lcov --version
00:08:35.196     23:49:50 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:35.196    23:49:50 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:35.196    23:49:50 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:35.196    23:49:50 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:35.196    23:49:50 env -- scripts/common.sh@336 -- # IFS=.-:
00:08:35.196    23:49:50 env -- scripts/common.sh@336 -- # read -ra ver1
00:08:35.196    23:49:50 env -- scripts/common.sh@337 -- # IFS=.-:
00:08:35.196    23:49:50 env -- scripts/common.sh@337 -- # read -ra ver2
00:08:35.196    23:49:50 env -- scripts/common.sh@338 -- # local 'op=<'
00:08:35.196    23:49:50 env -- scripts/common.sh@340 -- # ver1_l=2
00:08:35.196    23:49:50 env -- scripts/common.sh@341 -- # ver2_l=1
00:08:35.196    23:49:50 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:35.196    23:49:50 env -- scripts/common.sh@344 -- # case "$op" in
00:08:35.196    23:49:50 env -- scripts/common.sh@345 -- # : 1
00:08:35.196    23:49:50 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:35.196    23:49:50 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:35.196     23:49:50 env -- scripts/common.sh@365 -- # decimal 1
00:08:35.196     23:49:50 env -- scripts/common.sh@353 -- # local d=1
00:08:35.196     23:49:50 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:35.196     23:49:50 env -- scripts/common.sh@355 -- # echo 1
00:08:35.196    23:49:50 env -- scripts/common.sh@365 -- # ver1[v]=1
00:08:35.196     23:49:50 env -- scripts/common.sh@366 -- # decimal 2
00:08:35.196     23:49:50 env -- scripts/common.sh@353 -- # local d=2
00:08:35.196     23:49:50 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:35.196     23:49:50 env -- scripts/common.sh@355 -- # echo 2
00:08:35.196    23:49:50 env -- scripts/common.sh@366 -- # ver2[v]=2
00:08:35.196    23:49:50 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:35.196    23:49:50 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:35.196    23:49:50 env -- scripts/common.sh@368 -- # return 0
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:35.196  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.196  		--rc genhtml_branch_coverage=1
00:08:35.196  		--rc genhtml_function_coverage=1
00:08:35.196  		--rc genhtml_legend=1
00:08:35.196  		--rc geninfo_all_blocks=1
00:08:35.196  		--rc geninfo_unexecuted_blocks=1
00:08:35.196  		
00:08:35.196  		'
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:35.196  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.196  		--rc genhtml_branch_coverage=1
00:08:35.196  		--rc genhtml_function_coverage=1
00:08:35.196  		--rc genhtml_legend=1
00:08:35.196  		--rc geninfo_all_blocks=1
00:08:35.196  		--rc geninfo_unexecuted_blocks=1
00:08:35.196  		
00:08:35.196  		'
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:35.196  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.196  		--rc genhtml_branch_coverage=1
00:08:35.196  		--rc genhtml_function_coverage=1
00:08:35.196  		--rc genhtml_legend=1
00:08:35.196  		--rc geninfo_all_blocks=1
00:08:35.196  		--rc geninfo_unexecuted_blocks=1
00:08:35.196  		
00:08:35.196  		'
00:08:35.196    23:49:50 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:35.196  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:35.196  		--rc genhtml_branch_coverage=1
00:08:35.196  		--rc genhtml_function_coverage=1
00:08:35.196  		--rc genhtml_legend=1
00:08:35.196  		--rc geninfo_all_blocks=1
00:08:35.196  		--rc geninfo_unexecuted_blocks=1
00:08:35.196  		
00:08:35.196  		'
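The xtrace above walks through `cmp_versions 1.15 '<' 2` from scripts/common.sh: both versions are split on the characters `.-:` into arrays, then compared field by field, left to right. A minimal re-implementation of that traced logic (same splitting, same numeric compare; this is a sketch reconstructed from the xtrace, not the actual scripts/common.sh source):

```shell
cmp_versions() {
    # split "$1" and "$3" on the separators the trace shows (IFS=.-:)
    local op=$2 ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    # compare field by field up to the longer version, missing fields count as 0
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    # all fields equal
    [[ $op == '==' ]]
}
```

With this, `cmp_versions 1.15 '<' 2` succeeds, which is why the branch above goes on to enable the extra `--rc lcov_*` coverage options for lcov >= 1.15.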
00:08:35.196   23:49:50 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:08:35.196   23:49:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:35.196   23:49:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.196   23:49:50 env -- common/autotest_common.sh@10 -- # set +x
00:08:35.196  ************************************
00:08:35.196  START TEST env_memory
00:08:35.196  ************************************
00:08:35.196   23:49:51 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:08:35.196  
00:08:35.196  
00:08:35.196       CUnit - A unit testing framework for C - Version 2.1-3
00:08:35.196       http://cunit.sourceforge.net/
00:08:35.196  
00:08:35.196  
00:08:35.196  Suite: memory
00:08:35.455    Test: alloc and free memory map ...[2024-12-09 23:49:51.059186] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:35.455  passed
00:08:35.455    Test: mem map translation ...[2024-12-09 23:49:51.076827] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:35.455  [2024-12-09 23:49:51.076841] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:35.455  [2024-12-09 23:49:51.076875] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:35.455  [2024-12-09 23:49:51.076881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:35.455  passed
00:08:35.455    Test: mem map registration ...[2024-12-09 23:49:51.112536] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:35.455  [2024-12-09 23:49:51.112549] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:35.455  passed
00:08:35.455    Test: mem map adjacent registrations ...passed
00:08:35.455  
00:08:35.455  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:35.455                suites      1      1    n/a      0        0
00:08:35.455                 tests      4      4      4      0        0
00:08:35.455               asserts    152    152    152      0      n/a
00:08:35.455  
00:08:35.455  Elapsed time =    0.130 seconds
00:08:35.455  
00:08:35.455  real	0m0.142s
00:08:35.455  user	0m0.132s
00:08:35.455  sys	0m0.010s
00:08:35.455   23:49:51 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:35.455   23:49:51 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:35.455  ************************************
00:08:35.455  END TEST env_memory
00:08:35.455  ************************************
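The `*ERROR*` lines in the env_memory run above are expected negative-path output: memory_ut deliberately passes unaligned arguments (e.g. `vaddr=2097152 len=1234`) to the spdk_mem_map calls. A standalone sketch of the 2 MiB alignment rule those rejections imply (the function name and exact checks here are illustrative, inferred from the logged errors, not copied from lib/env_dpdk/memory.c):

```shell
# mem maps in the log use 2 MiB granularity
MAP_2MB=$((2 * 1024 * 1024))

map_params_valid() {
    # both the virtual address and the length must be non-zero multiples of 2 MiB
    local vaddr=$1 len=$2
    (( len != 0 && vaddr % MAP_2MB == 0 && len % MAP_2MB == 0 ))
}
```

Under this rule the two rejected calls from the log fail for opposite reasons: `vaddr=2097152 len=1234` has an unaligned length, `vaddr=1234 len=2097152` an unaligned address.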
00:08:35.455   23:49:51 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:35.455   23:49:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:35.455   23:49:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:35.455   23:49:51 env -- common/autotest_common.sh@10 -- # set +x
00:08:35.455  ************************************
00:08:35.455  START TEST env_vtophys
00:08:35.455  ************************************
00:08:35.455   23:49:51 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys
00:08:35.455  EAL: lib.eal log level changed from notice to debug
00:08:35.455  EAL: Detected lcore 0 as core 0 on socket 0
00:08:35.455  EAL: Detected lcore 1 as core 1 on socket 0
00:08:35.455  EAL: Detected lcore 2 as core 2 on socket 0
00:08:35.455  EAL: Detected lcore 3 as core 3 on socket 0
00:08:35.455  EAL: Detected lcore 4 as core 4 on socket 0
00:08:35.455  EAL: Detected lcore 5 as core 5 on socket 0
00:08:35.455  EAL: Detected lcore 6 as core 6 on socket 0
00:08:35.455  EAL: Detected lcore 7 as core 8 on socket 0
00:08:35.455  EAL: Detected lcore 8 as core 9 on socket 0
00:08:35.455  EAL: Detected lcore 9 as core 10 on socket 0
00:08:35.455  EAL: Detected lcore 10 as core 11 on socket 0
00:08:35.455  EAL: Detected lcore 11 as core 12 on socket 0
00:08:35.455  EAL: Detected lcore 12 as core 13 on socket 0
00:08:35.455  EAL: Detected lcore 13 as core 16 on socket 0
00:08:35.455  EAL: Detected lcore 14 as core 17 on socket 0
00:08:35.455  EAL: Detected lcore 15 as core 18 on socket 0
00:08:35.455  EAL: Detected lcore 16 as core 19 on socket 0
00:08:35.455  EAL: Detected lcore 17 as core 20 on socket 0
00:08:35.455  EAL: Detected lcore 18 as core 21 on socket 0
00:08:35.455  EAL: Detected lcore 19 as core 25 on socket 0
00:08:35.455  EAL: Detected lcore 20 as core 26 on socket 0
00:08:35.455  EAL: Detected lcore 21 as core 27 on socket 0
00:08:35.455  EAL: Detected lcore 22 as core 28 on socket 0
00:08:35.455  EAL: Detected lcore 23 as core 29 on socket 0
00:08:35.455  EAL: Detected lcore 24 as core 0 on socket 1
00:08:35.455  EAL: Detected lcore 25 as core 1 on socket 1
00:08:35.455  EAL: Detected lcore 26 as core 2 on socket 1
00:08:35.455  EAL: Detected lcore 27 as core 3 on socket 1
00:08:35.455  EAL: Detected lcore 28 as core 4 on socket 1
00:08:35.455  EAL: Detected lcore 29 as core 5 on socket 1
00:08:35.455  EAL: Detected lcore 30 as core 6 on socket 1
00:08:35.455  EAL: Detected lcore 31 as core 8 on socket 1
00:08:35.455  EAL: Detected lcore 32 as core 9 on socket 1
00:08:35.455  EAL: Detected lcore 33 as core 10 on socket 1
00:08:35.455  EAL: Detected lcore 34 as core 11 on socket 1
00:08:35.455  EAL: Detected lcore 35 as core 12 on socket 1
00:08:35.455  EAL: Detected lcore 36 as core 13 on socket 1
00:08:35.455  EAL: Detected lcore 37 as core 16 on socket 1
00:08:35.455  EAL: Detected lcore 38 as core 17 on socket 1
00:08:35.455  EAL: Detected lcore 39 as core 18 on socket 1
00:08:35.455  EAL: Detected lcore 40 as core 19 on socket 1
00:08:35.455  EAL: Detected lcore 41 as core 20 on socket 1
00:08:35.455  EAL: Detected lcore 42 as core 21 on socket 1
00:08:35.455  EAL: Detected lcore 43 as core 25 on socket 1
00:08:35.455  EAL: Detected lcore 44 as core 26 on socket 1
00:08:35.455  EAL: Detected lcore 45 as core 27 on socket 1
00:08:35.455  EAL: Detected lcore 46 as core 28 on socket 1
00:08:35.455  EAL: Detected lcore 47 as core 29 on socket 1
00:08:35.455  EAL: Detected lcore 48 as core 0 on socket 0
00:08:35.455  EAL: Detected lcore 49 as core 1 on socket 0
00:08:35.455  EAL: Detected lcore 50 as core 2 on socket 0
00:08:35.455  EAL: Detected lcore 51 as core 3 on socket 0
00:08:35.455  EAL: Detected lcore 52 as core 4 on socket 0
00:08:35.455  EAL: Detected lcore 53 as core 5 on socket 0
00:08:35.455  EAL: Detected lcore 54 as core 6 on socket 0
00:08:35.455  EAL: Detected lcore 55 as core 8 on socket 0
00:08:35.455  EAL: Detected lcore 56 as core 9 on socket 0
00:08:35.455  EAL: Detected lcore 57 as core 10 on socket 0
00:08:35.455  EAL: Detected lcore 58 as core 11 on socket 0
00:08:35.455  EAL: Detected lcore 59 as core 12 on socket 0
00:08:35.455  EAL: Detected lcore 60 as core 13 on socket 0
00:08:35.455  EAL: Detected lcore 61 as core 16 on socket 0
00:08:35.455  EAL: Detected lcore 62 as core 17 on socket 0
00:08:35.455  EAL: Detected lcore 63 as core 18 on socket 0
00:08:35.455  EAL: Detected lcore 64 as core 19 on socket 0
00:08:35.455  EAL: Detected lcore 65 as core 20 on socket 0
00:08:35.455  EAL: Detected lcore 66 as core 21 on socket 0
00:08:35.455  EAL: Detected lcore 67 as core 25 on socket 0
00:08:35.455  EAL: Detected lcore 68 as core 26 on socket 0
00:08:35.455  EAL: Detected lcore 69 as core 27 on socket 0
00:08:35.455  EAL: Detected lcore 70 as core 28 on socket 0
00:08:35.455  EAL: Detected lcore 71 as core 29 on socket 0
00:08:35.455  EAL: Detected lcore 72 as core 0 on socket 1
00:08:35.455  EAL: Detected lcore 73 as core 1 on socket 1
00:08:35.455  EAL: Detected lcore 74 as core 2 on socket 1
00:08:35.455  EAL: Detected lcore 75 as core 3 on socket 1
00:08:35.455  EAL: Detected lcore 76 as core 4 on socket 1
00:08:35.455  EAL: Detected lcore 77 as core 5 on socket 1
00:08:35.455  EAL: Detected lcore 78 as core 6 on socket 1
00:08:35.455  EAL: Detected lcore 79 as core 8 on socket 1
00:08:35.455  EAL: Detected lcore 80 as core 9 on socket 1
00:08:35.455  EAL: Detected lcore 81 as core 10 on socket 1
00:08:35.455  EAL: Detected lcore 82 as core 11 on socket 1
00:08:35.455  EAL: Detected lcore 83 as core 12 on socket 1
00:08:35.455  EAL: Detected lcore 84 as core 13 on socket 1
00:08:35.455  EAL: Detected lcore 85 as core 16 on socket 1
00:08:35.455  EAL: Detected lcore 86 as core 17 on socket 1
00:08:35.455  EAL: Detected lcore 87 as core 18 on socket 1
00:08:35.455  EAL: Detected lcore 88 as core 19 on socket 1
00:08:35.455  EAL: Detected lcore 89 as core 20 on socket 1
00:08:35.455  EAL: Detected lcore 90 as core 21 on socket 1
00:08:35.455  EAL: Detected lcore 91 as core 25 on socket 1
00:08:35.455  EAL: Detected lcore 92 as core 26 on socket 1
00:08:35.455  EAL: Detected lcore 93 as core 27 on socket 1
00:08:35.455  EAL: Detected lcore 94 as core 28 on socket 1
00:08:35.455  EAL: Detected lcore 95 as core 29 on socket 1
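The lcore dump above shows a 2-socket, 48-physical-core machine with SMT: lcore N and lcore N+48 report the same (socket, core) pair, so they are hyperthread siblings. A small helper deriving the sibling, assuming exactly this 96-lcore / 2-threads-per-core topology:

```shell
sibling_lcore() {
    # for this box: lcores 0-47 are first threads, 48-95 their SMT siblings
    local lcore=$1 nphys=48
    echo $(( lcore < nphys ? lcore + nphys : lcore - nphys ))
}
```

E.g. lcore 0 (core 0, socket 0) pairs with lcore 48, matching the two "core 0 on socket 0" lines in the dump.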
00:08:35.455  EAL: Maximum logical cores by configuration: 128
00:08:35.455  EAL: Detected CPU lcores: 96
00:08:35.455  EAL: Detected NUMA nodes: 2
00:08:35.455  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:08:35.455  EAL: Detected shared linkage of DPDK
00:08:35.455  EAL: No shared files mode enabled, IPC will be disabled
00:08:35.455  EAL: Bus pci wants IOVA as 'DC'
00:08:35.455  EAL: Buses did not request a specific IOVA mode.
00:08:35.455  EAL: IOMMU is available, selecting IOVA as VA mode.
00:08:35.455  EAL: Selected IOVA mode 'VA'
00:08:35.455  EAL: Probing VFIO support...
00:08:35.455  EAL: IOMMU type 1 (Type 1) is supported
00:08:35.455  EAL: IOMMU type 7 (sPAPR) is not supported
00:08:35.455  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:08:35.455  EAL: VFIO support initialized
00:08:35.455  EAL: Ask a virtual area of 0x2e000 bytes
00:08:35.455  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:35.455  EAL: Setting up physically contiguous memory...
00:08:35.455  EAL: Setting maximum number of open files to 524288
00:08:35.455  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:35.455  EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:08:35.455  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:35.455  EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:08:35.455  EAL: Ask a virtual area of 0x61000 bytes
00:08:35.455  EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:08:35.455  EAL: Memseg list allocated at socket 1, page size 0x800kB
00:08:35.455  EAL: Ask a virtual area of 0x400000000 bytes
00:08:35.455  EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:08:35.455  EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:08:35.455  EAL: Hugepages will be freed exactly as allocated.
00:08:35.455  EAL: No shared files mode enabled, IPC is disabled
00:08:35.455  EAL: No shared files mode enabled, IPC is disabled
00:08:35.455  EAL: TSC frequency is ~2100000 KHz
00:08:35.455  EAL: Main lcore 0 is ready (tid=7fa4e3f77a00;cpuset=[0])
00:08:35.455  EAL: Trying to obtain current memory policy.
00:08:35.455  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.455  EAL: Restoring previous memory policy: 0
00:08:35.455  EAL: request: mp_malloc_sync
00:08:35.455  EAL: No shared files mode enabled, IPC is disabled
00:08:35.455  EAL: Heap on socket 0 was expanded by 2MB
00:08:35.455  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:08:35.714  EAL: Mem event callback 'spdk:(nil)' registered
00:08:35.714  
00:08:35.714  
00:08:35.714       CUnit - A unit testing framework for C - Version 2.1-3
00:08:35.714       http://cunit.sourceforge.net/
00:08:35.714  
00:08:35.714  
00:08:35.714  Suite: components_suite
00:08:35.714    Test: vtophys_malloc_test ...passed
00:08:35.714    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:35.714  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.714  EAL: Restoring previous memory policy: 4
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was expanded by 4MB
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was shrunk by 4MB
00:08:35.714  EAL: Trying to obtain current memory policy.
00:08:35.714  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.714  EAL: Restoring previous memory policy: 4
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was expanded by 6MB
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was shrunk by 6MB
00:08:35.714  EAL: Trying to obtain current memory policy.
00:08:35.714  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.714  EAL: Restoring previous memory policy: 4
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was expanded by 10MB
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was shrunk by 10MB
00:08:35.714  EAL: Trying to obtain current memory policy.
00:08:35.714  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.714  EAL: Restoring previous memory policy: 4
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.714  EAL: No shared files mode enabled, IPC is disabled
00:08:35.714  EAL: Heap on socket 0 was expanded by 18MB
00:08:35.714  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.714  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was shrunk by 18MB
00:08:35.715  EAL: Trying to obtain current memory policy.
00:08:35.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.715  EAL: Restoring previous memory policy: 4
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was expanded by 34MB
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was shrunk by 34MB
00:08:35.715  EAL: Trying to obtain current memory policy.
00:08:35.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.715  EAL: Restoring previous memory policy: 4
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was expanded by 66MB
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was shrunk by 66MB
00:08:35.715  EAL: Trying to obtain current memory policy.
00:08:35.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.715  EAL: Restoring previous memory policy: 4
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was expanded by 130MB
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was shrunk by 130MB
00:08:35.715  EAL: Trying to obtain current memory policy.
00:08:35.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.715  EAL: Restoring previous memory policy: 4
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was expanded by 258MB
00:08:35.715  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.715  EAL: request: mp_malloc_sync
00:08:35.715  EAL: No shared files mode enabled, IPC is disabled
00:08:35.715  EAL: Heap on socket 0 was shrunk by 258MB
00:08:35.715  EAL: Trying to obtain current memory policy.
00:08:35.715  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:35.974  EAL: Restoring previous memory policy: 4
00:08:35.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.974  EAL: request: mp_malloc_sync
00:08:35.974  EAL: No shared files mode enabled, IPC is disabled
00:08:35.974  EAL: Heap on socket 0 was expanded by 514MB
00:08:35.974  EAL: Calling mem event callback 'spdk:(nil)'
00:08:35.974  EAL: request: mp_malloc_sync
00:08:35.974  EAL: No shared files mode enabled, IPC is disabled
00:08:35.974  EAL: Heap on socket 0 was shrunk by 514MB
00:08:35.974  EAL: Trying to obtain current memory policy.
00:08:35.974  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:36.232  EAL: Restoring previous memory policy: 4
00:08:36.232  EAL: Calling mem event callback 'spdk:(nil)'
00:08:36.232  EAL: request: mp_malloc_sync
00:08:36.232  EAL: No shared files mode enabled, IPC is disabled
00:08:36.232  EAL: Heap on socket 0 was expanded by 1026MB
00:08:36.491  EAL: Calling mem event callback 'spdk:(nil)'
00:08:36.491  EAL: request: mp_malloc_sync
00:08:36.491  EAL: No shared files mode enabled, IPC is disabled
00:08:36.491  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:36.491  passed
00:08:36.491  
00:08:36.491  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:36.491                suites      1      1    n/a      0        0
00:08:36.491                 tests      2      2      2      0        0
00:08:36.491               asserts    497    497    497      0      n/a
00:08:36.491  
00:08:36.491  Elapsed time =    0.961 seconds
00:08:36.491  EAL: Calling mem event callback 'spdk:(nil)'
00:08:36.491  EAL: request: mp_malloc_sync
00:08:36.491  EAL: No shared files mode enabled, IPC is disabled
00:08:36.491  EAL: Heap on socket 0 was shrunk by 2MB
00:08:36.491  EAL: No shared files mode enabled, IPC is disabled
00:08:36.491  EAL: No shared files mode enabled, IPC is disabled
00:08:36.491  EAL: No shared files mode enabled, IPC is disabled
00:08:36.491  
00:08:36.491  real	0m1.092s
00:08:36.491  user	0m0.636s
00:08:36.491  sys	0m0.430s
00:08:36.491   23:49:52 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:36.491   23:49:52 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:36.491  ************************************
00:08:36.491  END TEST env_vtophys
00:08:36.491  ************************************
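The expand/shrink pairs in the env_vtophys run above follow a fixed progression: 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB. That sequence is consistent with the test doubling its allocation size from 2 MB to 1024 MB on top of the 2 MB already reserved on the heap (the doubling interpretation is an inference from the logged deltas, not taken from the vtophys source):

```shell
heap_growth() {
    # allocation doubles 2,4,...,1024 MB; each line is allocation + 2 MB base heap
    local sz
    for (( sz = 2; sz <= 1024; sz *= 2 )); do
        echo $(( sz + 2 ))
    done
}
```

Running `heap_growth` reproduces the ten "expanded by N MB" values in order, ending at 1026 MB, after which the final "shrunk by 2MB" releases the base heap.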
00:08:36.750   23:49:52 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:36.750   23:49:52 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:36.750   23:49:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.750   23:49:52 env -- common/autotest_common.sh@10 -- # set +x
00:08:36.750  ************************************
00:08:36.750  START TEST env_pci
00:08:36.750  ************************************
00:08:36.750   23:49:52 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:08:36.750  
00:08:36.750  
00:08:36.750       CUnit - A unit testing framework for C - Version 2.1-3
00:08:36.750       http://cunit.sourceforge.net/
00:08:36.750  
00:08:36.750  
00:08:36.750  Suite: pci
00:08:36.750    Test: pci_hook ...[2024-12-09 23:49:52.407812] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2895422 has claimed it
00:08:36.750  EAL: Cannot find device (10000:00:01.0)
00:08:36.750  EAL: Failed to attach device on primary process
00:08:36.750  passed
00:08:36.750  
00:08:36.750  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:36.750                suites      1      1    n/a      0        0
00:08:36.750                 tests      1      1      1      0        0
00:08:36.750               asserts     25     25     25      0      n/a
00:08:36.750  
00:08:36.750  Elapsed time =    0.026 seconds
00:08:36.750  
00:08:36.750  real	0m0.047s
00:08:36.750  user	0m0.014s
00:08:36.750  sys	0m0.033s
00:08:36.750   23:49:52 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:36.750   23:49:52 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:36.750  ************************************
00:08:36.750  END TEST env_pci
00:08:36.750  ************************************
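The pci_hook failure above is intentional: the test claims a fake BDF (10000:00:01.0) so that `spdk_pci_device_claim` reports the per-device lock as already held. The error message names the mechanism: a lock file per device at `/var/tmp/spdk_pci_lock_<BDF>`. A toy sketch of that claim scheme (file-existence check standing in for SPDK's actual fcntl-based lock; path moved to /tmp so the sketch needs no privileges):

```shell
claim_device() {
    # one lock file per PCI address; a second claim of the same BDF fails
    local bdf=$1 lock="/tmp/spdk_pci_lock_$bdf"
    if [ -e "$lock" ]; then
        echo "Cannot create lock on device $lock" >&2
        return 1
    fi
    : > "$lock"
}
```

First claim of a BDF succeeds and creates the lock; a repeat claim fails, mirroring the "probably process 2895422 has claimed it" message in the log.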
00:08:36.750   23:49:52 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:36.750    23:49:52 env -- env/env.sh@15 -- # uname
00:08:36.750   23:49:52 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:36.750   23:49:52 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:36.750   23:49:52 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:36.750   23:49:52 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:36.750   23:49:52 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.750   23:49:52 env -- common/autotest_common.sh@10 -- # set +x
00:08:36.750  ************************************
00:08:36.750  START TEST env_dpdk_post_init
00:08:36.751  ************************************
00:08:36.751   23:49:52 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:36.751  EAL: Detected CPU lcores: 96
00:08:36.751  EAL: Detected NUMA nodes: 2
00:08:36.751  EAL: Detected shared linkage of DPDK
00:08:36.751  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:36.751  EAL: Selected IOVA mode 'VA'
00:08:36.751  EAL: VFIO support initialized
00:08:36.751  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:37.010  EAL: Using IOMMU type 1 (Type 1)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:08:37.010  EAL: Ignore mapping IO port bar(1)
00:08:37.010  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:08:37.947  EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:08:37.947  EAL: Ignore mapping IO port bar(1)
00:08:37.947  EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:08:41.232  EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:08:41.232  EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:08:41.232  Starting DPDK initialization...
00:08:41.232  Starting SPDK post initialization...
00:08:41.232  SPDK NVMe probe
00:08:41.232  Attaching to 0000:5e:00.0
00:08:41.232  Attached to 0000:5e:00.0
00:08:41.232  Cleaning up...
00:08:41.232  
00:08:41.232  real	0m4.409s
00:08:41.232  user	0m3.033s
00:08:41.232  sys	0m0.450s
00:08:41.232   23:49:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.232   23:49:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:41.232  ************************************
00:08:41.232  END TEST env_dpdk_post_init
00:08:41.232  ************************************
00:08:41.232    23:49:56 env -- env/env.sh@26 -- # uname
00:08:41.232   23:49:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:41.232   23:49:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:41.232   23:49:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.232   23:49:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.232   23:49:56 env -- common/autotest_common.sh@10 -- # set +x
00:08:41.232  ************************************
00:08:41.232  START TEST env_mem_callbacks
00:08:41.232  ************************************
00:08:41.232   23:49:57 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:08:41.232  EAL: Detected CPU lcores: 96
00:08:41.232  EAL: Detected NUMA nodes: 2
00:08:41.232  EAL: Detected shared linkage of DPDK
00:08:41.232  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:41.232  EAL: Selected IOVA mode 'VA'
00:08:41.232  EAL: VFIO support initialized
00:08:41.232  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:41.232  
00:08:41.232  
00:08:41.232       CUnit - A unit testing framework for C - Version 2.1-3
00:08:41.232       http://cunit.sourceforge.net/
00:08:41.232  
00:08:41.232  
00:08:41.232  Suite: memory
00:08:41.232    Test: test ...
00:08:41.232  register 0x200000200000 2097152
00:08:41.232  malloc 3145728
00:08:41.232  register 0x200000400000 4194304
00:08:41.232  buf 0x200000500000 len 3145728 PASSED
00:08:41.232  malloc 64
00:08:41.232  buf 0x2000004fff40 len 64 PASSED
00:08:41.232  malloc 4194304
00:08:41.232  register 0x200000800000 6291456
00:08:41.232  buf 0x200000a00000 len 4194304 PASSED
00:08:41.232  free 0x200000500000 3145728
00:08:41.232  free 0x2000004fff40 64
00:08:41.232  unregister 0x200000400000 4194304 PASSED
00:08:41.232  free 0x200000a00000 4194304
00:08:41.232  unregister 0x200000800000 6291456 PASSED
00:08:41.232  malloc 8388608
00:08:41.232  register 0x200000400000 10485760
00:08:41.232  buf 0x200000600000 len 8388608 PASSED
00:08:41.232  free 0x200000600000 8388608
00:08:41.232  unregister 0x200000400000 10485760 PASSED
00:08:41.232  passed
00:08:41.232  
00:08:41.232  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:41.232                suites      1      1    n/a      0        0
00:08:41.232                 tests      1      1      1      0        0
00:08:41.232               asserts     15     15     15      0      n/a
00:08:41.232  
00:08:41.232  Elapsed time =    0.008 seconds
00:08:41.232  
00:08:41.232  real	0m0.060s
00:08:41.232  user	0m0.023s
00:08:41.232  sys	0m0.038s
00:08:41.232   23:49:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.232   23:49:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:41.232  ************************************
00:08:41.232  END TEST env_mem_callbacks
00:08:41.232  ************************************
00:08:41.491  
00:08:41.491  real	0m6.288s
00:08:41.491  user	0m4.083s
00:08:41.491  sys	0m1.287s
00:08:41.491   23:49:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:41.491   23:49:57 env -- common/autotest_common.sh@10 -- # set +x
00:08:41.491  ************************************
00:08:41.491  END TEST env
00:08:41.491  ************************************
00:08:41.491   23:49:57  -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:41.491   23:49:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.491   23:49:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.491   23:49:57  -- common/autotest_common.sh@10 -- # set +x
00:08:41.491  ************************************
00:08:41.491  START TEST rpc
00:08:41.491  ************************************
00:08:41.491   23:49:57 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:08:41.491  * Looking for test storage...
00:08:41.491  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:41.491    23:49:57 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:41.491     23:49:57 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:41.491     23:49:57 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:41.491    23:49:57 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:41.491    23:49:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:41.491    23:49:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:41.491    23:49:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:41.491    23:49:57 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:41.491    23:49:57 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:41.491    23:49:57 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:41.491    23:49:57 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:41.491    23:49:57 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:41.491    23:49:57 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:41.491    23:49:57 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:41.491    23:49:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:41.491    23:49:57 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:41.491    23:49:57 rpc -- scripts/common.sh@345 -- # : 1
00:08:41.491    23:49:57 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:41.491    23:49:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:41.491     23:49:57 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:41.491     23:49:57 rpc -- scripts/common.sh@353 -- # local d=1
00:08:41.491     23:49:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:41.491     23:49:57 rpc -- scripts/common.sh@355 -- # echo 1
00:08:41.491    23:49:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:41.491     23:49:57 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:41.491     23:49:57 rpc -- scripts/common.sh@353 -- # local d=2
00:08:41.492     23:49:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:41.492     23:49:57 rpc -- scripts/common.sh@355 -- # echo 2
00:08:41.492    23:49:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:41.492    23:49:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:41.492    23:49:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:41.492    23:49:57 rpc -- scripts/common.sh@368 -- # return 0
00:08:41.492    23:49:57 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:41.492    23:49:57 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:41.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.492  		--rc genhtml_branch_coverage=1
00:08:41.492  		--rc genhtml_function_coverage=1
00:08:41.492  		--rc genhtml_legend=1
00:08:41.492  		--rc geninfo_all_blocks=1
00:08:41.492  		--rc geninfo_unexecuted_blocks=1
00:08:41.492  		
00:08:41.492  		'
00:08:41.492    23:49:57 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:41.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.492  		--rc genhtml_branch_coverage=1
00:08:41.492  		--rc genhtml_function_coverage=1
00:08:41.492  		--rc genhtml_legend=1
00:08:41.492  		--rc geninfo_all_blocks=1
00:08:41.492  		--rc geninfo_unexecuted_blocks=1
00:08:41.492  		
00:08:41.492  		'
00:08:41.492    23:49:57 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:41.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.492  		--rc genhtml_branch_coverage=1
00:08:41.492  		--rc genhtml_function_coverage=1
00:08:41.492  		--rc genhtml_legend=1
00:08:41.492  		--rc geninfo_all_blocks=1
00:08:41.492  		--rc geninfo_unexecuted_blocks=1
00:08:41.492  		
00:08:41.492  		'
00:08:41.492    23:49:57 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:41.492  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.492  		--rc genhtml_branch_coverage=1
00:08:41.492  		--rc genhtml_function_coverage=1
00:08:41.492  		--rc genhtml_legend=1
00:08:41.492  		--rc geninfo_all_blocks=1
00:08:41.492  		--rc geninfo_unexecuted_blocks=1
00:08:41.492  		
00:08:41.492  		'
00:08:41.492   23:49:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2896257
00:08:41.492   23:49:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:41.492   23:49:57 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:08:41.492   23:49:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2896257
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 2896257 ']'
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:41.492  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:41.492   23:49:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:41.750  [2024-12-09 23:49:57.399793] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:41.750  [2024-12-09 23:49:57.399839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896257 ]
00:08:41.750  [2024-12-09 23:49:57.471100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.750  [2024-12-09 23:49:57.510075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:41.750  [2024-12-09 23:49:57.510113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2896257' to capture a snapshot of events at runtime.
00:08:41.750  [2024-12-09 23:49:57.510121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:41.750  [2024-12-09 23:49:57.510126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:41.750  [2024-12-09 23:49:57.510131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2896257 for offline analysis/debug.
00:08:41.750  [2024-12-09 23:49:57.510637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:42.010   23:49:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:42.010   23:49:57 rpc -- common/autotest_common.sh@868 -- # return 0
00:08:42.010   23:49:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:42.010   23:49:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:42.010   23:49:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:42.010   23:49:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:42.010   23:49:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.010   23:49:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.010   23:49:57 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.010  ************************************
00:08:42.010  START TEST rpc_integrity
00:08:42.010  ************************************
00:08:42.010   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:42.010    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.010   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:42.010    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:42.010   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:42.010    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.010   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:42.010    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.010    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.010   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:42.010  {
00:08:42.010  "name": "Malloc0",
00:08:42.010  "aliases": [
00:08:42.010  "a7947e98-844e-4999-a4e5-0d9df1e1cb8a"
00:08:42.010  ],
00:08:42.010  "product_name": "Malloc disk",
00:08:42.010  "block_size": 512,
00:08:42.010  "num_blocks": 16384,
00:08:42.010  "uuid": "a7947e98-844e-4999-a4e5-0d9df1e1cb8a",
00:08:42.010  "assigned_rate_limits": {
00:08:42.010  "rw_ios_per_sec": 0,
00:08:42.010  "rw_mbytes_per_sec": 0,
00:08:42.010  "r_mbytes_per_sec": 0,
00:08:42.010  "w_mbytes_per_sec": 0
00:08:42.010  },
00:08:42.010  "claimed": false,
00:08:42.010  "zoned": false,
00:08:42.010  "supported_io_types": {
00:08:42.010  "read": true,
00:08:42.010  "write": true,
00:08:42.010  "unmap": true,
00:08:42.010  "flush": true,
00:08:42.010  "reset": true,
00:08:42.010  "nvme_admin": false,
00:08:42.010  "nvme_io": false,
00:08:42.010  "nvme_io_md": false,
00:08:42.010  "write_zeroes": true,
00:08:42.010  "zcopy": true,
00:08:42.010  "get_zone_info": false,
00:08:42.010  "zone_management": false,
00:08:42.010  "zone_append": false,
00:08:42.010  "compare": false,
00:08:42.010  "compare_and_write": false,
00:08:42.010  "abort": true,
00:08:42.010  "seek_hole": false,
00:08:42.010  "seek_data": false,
00:08:42.010  "copy": true,
00:08:42.010  "nvme_iov_md": false
00:08:42.010  },
00:08:42.010  "memory_domains": [
00:08:42.010  {
00:08:42.010  "dma_device_id": "system",
00:08:42.010  "dma_device_type": 1
00:08:42.010  },
00:08:42.010  {
00:08:42.010  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.010  "dma_device_type": 2
00:08:42.010  }
00:08:42.010  ],
00:08:42.010  "driver_specific": {}
00:08:42.010  }
00:08:42.010  ]'
00:08:42.010    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:42.269   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:42.269   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:42.269   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.269   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.269  [2024-12-09 23:49:57.895664] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:42.269  [2024-12-09 23:49:57.895696] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:42.269  [2024-12-09 23:49:57.895710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10e6740
00:08:42.269  [2024-12-09 23:49:57.895716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:42.269  [2024-12-09 23:49:57.896792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:42.269  [2024-12-09 23:49:57.896814] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:42.269  Passthru0
00:08:42.269   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.269    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:42.269    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.269    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.269    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.269   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:42.269  {
00:08:42.269  "name": "Malloc0",
00:08:42.269  "aliases": [
00:08:42.269  "a7947e98-844e-4999-a4e5-0d9df1e1cb8a"
00:08:42.269  ],
00:08:42.269  "product_name": "Malloc disk",
00:08:42.269  "block_size": 512,
00:08:42.269  "num_blocks": 16384,
00:08:42.269  "uuid": "a7947e98-844e-4999-a4e5-0d9df1e1cb8a",
00:08:42.269  "assigned_rate_limits": {
00:08:42.269  "rw_ios_per_sec": 0,
00:08:42.269  "rw_mbytes_per_sec": 0,
00:08:42.269  "r_mbytes_per_sec": 0,
00:08:42.269  "w_mbytes_per_sec": 0
00:08:42.269  },
00:08:42.269  "claimed": true,
00:08:42.269  "claim_type": "exclusive_write",
00:08:42.269  "zoned": false,
00:08:42.269  "supported_io_types": {
00:08:42.269  "read": true,
00:08:42.269  "write": true,
00:08:42.269  "unmap": true,
00:08:42.269  "flush": true,
00:08:42.269  "reset": true,
00:08:42.269  "nvme_admin": false,
00:08:42.269  "nvme_io": false,
00:08:42.269  "nvme_io_md": false,
00:08:42.269  "write_zeroes": true,
00:08:42.269  "zcopy": true,
00:08:42.269  "get_zone_info": false,
00:08:42.269  "zone_management": false,
00:08:42.269  "zone_append": false,
00:08:42.269  "compare": false,
00:08:42.270  "compare_and_write": false,
00:08:42.270  "abort": true,
00:08:42.270  "seek_hole": false,
00:08:42.270  "seek_data": false,
00:08:42.270  "copy": true,
00:08:42.270  "nvme_iov_md": false
00:08:42.270  },
00:08:42.270  "memory_domains": [
00:08:42.270  {
00:08:42.270  "dma_device_id": "system",
00:08:42.270  "dma_device_type": 1
00:08:42.270  },
00:08:42.270  {
00:08:42.270  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.270  "dma_device_type": 2
00:08:42.270  }
00:08:42.270  ],
00:08:42.270  "driver_specific": {}
00:08:42.270  },
00:08:42.270  {
00:08:42.270  "name": "Passthru0",
00:08:42.270  "aliases": [
00:08:42.270  "bb20ebd5-bbfb-5f02-9dee-6c3b2d26289e"
00:08:42.270  ],
00:08:42.270  "product_name": "passthru",
00:08:42.270  "block_size": 512,
00:08:42.270  "num_blocks": 16384,
00:08:42.270  "uuid": "bb20ebd5-bbfb-5f02-9dee-6c3b2d26289e",
00:08:42.270  "assigned_rate_limits": {
00:08:42.270  "rw_ios_per_sec": 0,
00:08:42.270  "rw_mbytes_per_sec": 0,
00:08:42.270  "r_mbytes_per_sec": 0,
00:08:42.270  "w_mbytes_per_sec": 0
00:08:42.270  },
00:08:42.270  "claimed": false,
00:08:42.270  "zoned": false,
00:08:42.270  "supported_io_types": {
00:08:42.270  "read": true,
00:08:42.270  "write": true,
00:08:42.270  "unmap": true,
00:08:42.270  "flush": true,
00:08:42.270  "reset": true,
00:08:42.270  "nvme_admin": false,
00:08:42.270  "nvme_io": false,
00:08:42.270  "nvme_io_md": false,
00:08:42.270  "write_zeroes": true,
00:08:42.270  "zcopy": true,
00:08:42.270  "get_zone_info": false,
00:08:42.270  "zone_management": false,
00:08:42.270  "zone_append": false,
00:08:42.270  "compare": false,
00:08:42.270  "compare_and_write": false,
00:08:42.270  "abort": true,
00:08:42.270  "seek_hole": false,
00:08:42.270  "seek_data": false,
00:08:42.270  "copy": true,
00:08:42.270  "nvme_iov_md": false
00:08:42.270  },
00:08:42.270  "memory_domains": [
00:08:42.270  {
00:08:42.270  "dma_device_id": "system",
00:08:42.270  "dma_device_type": 1
00:08:42.270  },
00:08:42.270  {
00:08:42.270  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.270  "dma_device_type": 2
00:08:42.270  }
00:08:42.270  ],
00:08:42.270  "driver_specific": {
00:08:42.270  "passthru": {
00:08:42.270  "name": "Passthru0",
00:08:42.270  "base_bdev_name": "Malloc0"
00:08:42.270  }
00:08:42.270  }
00:08:42.270  }
00:08:42.270  ]'
00:08:42.270    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:42.270   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:42.270   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.270   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.270   23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.270    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:42.270    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.270    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.270    23:49:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.270   23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:42.270    23:49:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:42.270   23:49:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:42.270  
00:08:42.270  real	0m0.248s
00:08:42.270  user	0m0.146s
00:08:42.270  sys	0m0.040s
00:08:42.270   23:49:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.270   23:49:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.270  ************************************
00:08:42.270  END TEST rpc_integrity
00:08:42.270  ************************************
00:08:42.270   23:49:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:42.270   23:49:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.270   23:49:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.270   23:49:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.270  ************************************
00:08:42.270  START TEST rpc_plugins
00:08:42.270  ************************************
00:08:42.270   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:08:42.270    23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.270   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:42.270    23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:42.270    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.270   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:42.270  {
00:08:42.270  "name": "Malloc1",
00:08:42.270  "aliases": [
00:08:42.270  "50e1bff1-347a-482d-bc75-2be58af0110e"
00:08:42.270  ],
00:08:42.270  "product_name": "Malloc disk",
00:08:42.270  "block_size": 4096,
00:08:42.270  "num_blocks": 256,
00:08:42.270  "uuid": "50e1bff1-347a-482d-bc75-2be58af0110e",
00:08:42.270  "assigned_rate_limits": {
00:08:42.270  "rw_ios_per_sec": 0,
00:08:42.270  "rw_mbytes_per_sec": 0,
00:08:42.270  "r_mbytes_per_sec": 0,
00:08:42.270  "w_mbytes_per_sec": 0
00:08:42.270  },
00:08:42.270  "claimed": false,
00:08:42.270  "zoned": false,
00:08:42.270  "supported_io_types": {
00:08:42.270  "read": true,
00:08:42.270  "write": true,
00:08:42.270  "unmap": true,
00:08:42.270  "flush": true,
00:08:42.270  "reset": true,
00:08:42.270  "nvme_admin": false,
00:08:42.270  "nvme_io": false,
00:08:42.270  "nvme_io_md": false,
00:08:42.270  "write_zeroes": true,
00:08:42.270  "zcopy": true,
00:08:42.270  "get_zone_info": false,
00:08:42.270  "zone_management": false,
00:08:42.270  "zone_append": false,
00:08:42.270  "compare": false,
00:08:42.270  "compare_and_write": false,
00:08:42.270  "abort": true,
00:08:42.270  "seek_hole": false,
00:08:42.270  "seek_data": false,
00:08:42.270  "copy": true,
00:08:42.270  "nvme_iov_md": false
00:08:42.270  },
00:08:42.270  "memory_domains": [
00:08:42.270  {
00:08:42.270  "dma_device_id": "system",
00:08:42.270  "dma_device_type": 1
00:08:42.270  },
00:08:42.270  {
00:08:42.270  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.270  "dma_device_type": 2
00:08:42.270  }
00:08:42.270  ],
00:08:42.270  "driver_specific": {}
00:08:42.270  }
00:08:42.270  ]'
00:08:42.270    23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:42.529   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:42.529   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:42.529   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.529   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:42.529   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.529    23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:42.529    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.529    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:42.529    23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.529   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:42.529    23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:42.529   23:49:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:42.529  
00:08:42.529  real	0m0.141s
00:08:42.529  user	0m0.084s
00:08:42.529  sys	0m0.022s
00:08:42.529   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.529   23:49:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:42.529  ************************************
00:08:42.529  END TEST rpc_plugins
00:08:42.529  ************************************
00:08:42.529   23:49:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:42.529   23:49:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.529   23:49:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.529   23:49:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.529  ************************************
00:08:42.529  START TEST rpc_trace_cmd_test
00:08:42.529  ************************************
00:08:42.529   23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:08:42.529   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.529   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:42.529  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2896257",
00:08:42.529  "tpoint_group_mask": "0x8",
00:08:42.529  "iscsi_conn": {
00:08:42.529  "mask": "0x2",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "scsi": {
00:08:42.529  "mask": "0x4",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "bdev": {
00:08:42.529  "mask": "0x8",
00:08:42.529  "tpoint_mask": "0xffffffffffffffff"
00:08:42.529  },
00:08:42.529  "nvmf_rdma": {
00:08:42.529  "mask": "0x10",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "nvmf_tcp": {
00:08:42.529  "mask": "0x20",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "ftl": {
00:08:42.529  "mask": "0x40",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "blobfs": {
00:08:42.529  "mask": "0x80",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "dsa": {
00:08:42.529  "mask": "0x200",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "thread": {
00:08:42.529  "mask": "0x400",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "nvme_pcie": {
00:08:42.529  "mask": "0x800",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "iaa": {
00:08:42.529  "mask": "0x1000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "nvme_tcp": {
00:08:42.529  "mask": "0x2000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "bdev_nvme": {
00:08:42.529  "mask": "0x4000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "sock": {
00:08:42.529  "mask": "0x8000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "blob": {
00:08:42.529  "mask": "0x10000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "bdev_raid": {
00:08:42.529  "mask": "0x20000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  },
00:08:42.529  "scheduler": {
00:08:42.529  "mask": "0x40000",
00:08:42.529  "tpoint_mask": "0x0"
00:08:42.529  }
00:08:42.529  }'
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:08:42.529   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:08:42.529    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:42.787    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:42.787    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:42.787    23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:08:42.787  
00:08:42.787  real	0m0.225s
00:08:42.787  user	0m0.186s
00:08:42.787  sys	0m0.032s
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:42.787   23:49:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.787  ************************************
00:08:42.787  END TEST rpc_trace_cmd_test
00:08:42.787  ************************************
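The rpc_trace_cmd_test trace above validates the trace_get_tpoint_group_mask response with a series of jq probes. A minimal sketch of those same checks, run against a trimmed sample response rather than a live SPDK target (the sample JSON and its values are illustrative, not captured output):

```shell
# Sample shaped like trace_get_tpoint_group_mask output with bdev tracing on.
sample='{"tpoint_group_mask":"0x8","tpoint_shm_path":"/dev/shm/spdk_trace.pid1","bdev":{"mask":"0x8","tpoint_mask":"0xffffffffffffffff"},"nvmf_tcp":{"mask":"0x20","tpoint_mask":"0x0"}}'

# rpc.sh@43: the response object must have more than 2 keys.
[ "$(echo "$sample" | jq 'length')" -gt 2 ] || exit 1
# rpc.sh@44-45: the group mask and shared-memory path must be present.
[ "$(echo "$sample" | jq 'has("tpoint_group_mask")')" = true ] || exit 1
[ "$(echo "$sample" | jq 'has("tpoint_shm_path")')" = true ] || exit 1
# rpc.sh@46-47: the enabled group (bdev) must report a non-zero tpoint_mask.
[ "$(echo "$sample" | jq 'has("bdev")')" = true ] || exit 1
[ "$(echo "$sample" | jq -r '.bdev.tpoint_mask')" != 0x0 ] || exit 1
echo ok
```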
00:08:42.787   23:49:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:42.787   23:49:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:42.787   23:49:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:42.787   23:49:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:42.787   23:49:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:42.787   23:49:58 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:42.787  ************************************
00:08:42.787  START TEST rpc_daemon_integrity
00:08:42.787  ************************************
00:08:42.787   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.787   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:42.787   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.787    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.045   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.045   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:43.045  {
00:08:43.045  "name": "Malloc2",
00:08:43.045  "aliases": [
00:08:43.045  "bfaa1ca6-d007-45c1-8197-44ed65b1ecbf"
00:08:43.045  ],
00:08:43.045  "product_name": "Malloc disk",
00:08:43.045  "block_size": 512,
00:08:43.045  "num_blocks": 16384,
00:08:43.045  "uuid": "bfaa1ca6-d007-45c1-8197-44ed65b1ecbf",
00:08:43.045  "assigned_rate_limits": {
00:08:43.045  "rw_ios_per_sec": 0,
00:08:43.045  "rw_mbytes_per_sec": 0,
00:08:43.045  "r_mbytes_per_sec": 0,
00:08:43.045  "w_mbytes_per_sec": 0
00:08:43.045  },
00:08:43.045  "claimed": false,
00:08:43.045  "zoned": false,
00:08:43.045  "supported_io_types": {
00:08:43.045  "read": true,
00:08:43.045  "write": true,
00:08:43.045  "unmap": true,
00:08:43.045  "flush": true,
00:08:43.045  "reset": true,
00:08:43.045  "nvme_admin": false,
00:08:43.045  "nvme_io": false,
00:08:43.045  "nvme_io_md": false,
00:08:43.045  "write_zeroes": true,
00:08:43.045  "zcopy": true,
00:08:43.045  "get_zone_info": false,
00:08:43.045  "zone_management": false,
00:08:43.045  "zone_append": false,
00:08:43.045  "compare": false,
00:08:43.045  "compare_and_write": false,
00:08:43.045  "abort": true,
00:08:43.045  "seek_hole": false,
00:08:43.045  "seek_data": false,
00:08:43.045  "copy": true,
00:08:43.045  "nvme_iov_md": false
00:08:43.045  },
00:08:43.045  "memory_domains": [
00:08:43.045  {
00:08:43.045  "dma_device_id": "system",
00:08:43.045  "dma_device_type": 1
00:08:43.045  },
00:08:43.045  {
00:08:43.045  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.045  "dma_device_type": 2
00:08:43.045  }
00:08:43.045  ],
00:08:43.045  "driver_specific": {}
00:08:43.045  }
00:08:43.045  ]'
00:08:43.045    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:43.045   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046  [2024-12-09 23:49:58.721885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:43.046  [2024-12-09 23:49:58.721913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:43.046  [2024-12-09 23:49:58.721925] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10b3fe0
00:08:43.046  [2024-12-09 23:49:58.721931] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:43.046  [2024-12-09 23:49:58.722884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:43.046  [2024-12-09 23:49:58.722908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:43.046  Passthru0
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:43.046  {
00:08:43.046  "name": "Malloc2",
00:08:43.046  "aliases": [
00:08:43.046  "bfaa1ca6-d007-45c1-8197-44ed65b1ecbf"
00:08:43.046  ],
00:08:43.046  "product_name": "Malloc disk",
00:08:43.046  "block_size": 512,
00:08:43.046  "num_blocks": 16384,
00:08:43.046  "uuid": "bfaa1ca6-d007-45c1-8197-44ed65b1ecbf",
00:08:43.046  "assigned_rate_limits": {
00:08:43.046  "rw_ios_per_sec": 0,
00:08:43.046  "rw_mbytes_per_sec": 0,
00:08:43.046  "r_mbytes_per_sec": 0,
00:08:43.046  "w_mbytes_per_sec": 0
00:08:43.046  },
00:08:43.046  "claimed": true,
00:08:43.046  "claim_type": "exclusive_write",
00:08:43.046  "zoned": false,
00:08:43.046  "supported_io_types": {
00:08:43.046  "read": true,
00:08:43.046  "write": true,
00:08:43.046  "unmap": true,
00:08:43.046  "flush": true,
00:08:43.046  "reset": true,
00:08:43.046  "nvme_admin": false,
00:08:43.046  "nvme_io": false,
00:08:43.046  "nvme_io_md": false,
00:08:43.046  "write_zeroes": true,
00:08:43.046  "zcopy": true,
00:08:43.046  "get_zone_info": false,
00:08:43.046  "zone_management": false,
00:08:43.046  "zone_append": false,
00:08:43.046  "compare": false,
00:08:43.046  "compare_and_write": false,
00:08:43.046  "abort": true,
00:08:43.046  "seek_hole": false,
00:08:43.046  "seek_data": false,
00:08:43.046  "copy": true,
00:08:43.046  "nvme_iov_md": false
00:08:43.046  },
00:08:43.046  "memory_domains": [
00:08:43.046  {
00:08:43.046  "dma_device_id": "system",
00:08:43.046  "dma_device_type": 1
00:08:43.046  },
00:08:43.046  {
00:08:43.046  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.046  "dma_device_type": 2
00:08:43.046  }
00:08:43.046  ],
00:08:43.046  "driver_specific": {}
00:08:43.046  },
00:08:43.046  {
00:08:43.046  "name": "Passthru0",
00:08:43.046  "aliases": [
00:08:43.046  "06bb4d03-74d9-59b2-901b-316cd737c503"
00:08:43.046  ],
00:08:43.046  "product_name": "passthru",
00:08:43.046  "block_size": 512,
00:08:43.046  "num_blocks": 16384,
00:08:43.046  "uuid": "06bb4d03-74d9-59b2-901b-316cd737c503",
00:08:43.046  "assigned_rate_limits": {
00:08:43.046  "rw_ios_per_sec": 0,
00:08:43.046  "rw_mbytes_per_sec": 0,
00:08:43.046  "r_mbytes_per_sec": 0,
00:08:43.046  "w_mbytes_per_sec": 0
00:08:43.046  },
00:08:43.046  "claimed": false,
00:08:43.046  "zoned": false,
00:08:43.046  "supported_io_types": {
00:08:43.046  "read": true,
00:08:43.046  "write": true,
00:08:43.046  "unmap": true,
00:08:43.046  "flush": true,
00:08:43.046  "reset": true,
00:08:43.046  "nvme_admin": false,
00:08:43.046  "nvme_io": false,
00:08:43.046  "nvme_io_md": false,
00:08:43.046  "write_zeroes": true,
00:08:43.046  "zcopy": true,
00:08:43.046  "get_zone_info": false,
00:08:43.046  "zone_management": false,
00:08:43.046  "zone_append": false,
00:08:43.046  "compare": false,
00:08:43.046  "compare_and_write": false,
00:08:43.046  "abort": true,
00:08:43.046  "seek_hole": false,
00:08:43.046  "seek_data": false,
00:08:43.046  "copy": true,
00:08:43.046  "nvme_iov_md": false
00:08:43.046  },
00:08:43.046  "memory_domains": [
00:08:43.046  {
00:08:43.046  "dma_device_id": "system",
00:08:43.046  "dma_device_type": 1
00:08:43.046  },
00:08:43.046  {
00:08:43.046  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:43.046  "dma_device_type": 2
00:08:43.046  }
00:08:43.046  ],
00:08:43.046  "driver_specific": {
00:08:43.046  "passthru": {
00:08:43.046  "name": "Passthru0",
00:08:43.046  "base_bdev_name": "Malloc2"
00:08:43.046  }
00:08:43.046  }
00:08:43.046  }
00:08:43.046  ]'
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:43.046    23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:43.046  
00:08:43.046  real	0m0.271s
00:08:43.046  user	0m0.163s
00:08:43.046  sys	0m0.038s
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.046   23:49:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:43.046  ************************************
00:08:43.046  END TEST rpc_daemon_integrity
00:08:43.046  ************************************
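The rpc_daemon_integrity trace above follows the rpc_integrity pattern: assert an empty bdev list, create Malloc2, wrap it in Passthru0, then tear both down and assert the list is empty again, checking `jq length` at each step. A sketch of those length checks using canned bdev arrays in place of live `rpc_cmd bdev_get_bdevs` output (the arrays are illustrative stand-ins):

```shell
empty='[]'
one='[{"name":"Malloc2"}]'
two='[{"name":"Malloc2"},{"name":"Passthru0"}]'

# rpc.sh@13: before bdev_malloc_create the list is empty.
[ "$(echo "$empty" | jq 'length')" -eq 0 ] || exit 1
# rpc.sh@17: after bdev_malloc_create 8 512 there is exactly one bdev.
[ "$(echo "$one" | jq 'length')" -eq 1 ] || exit 1
# rpc.sh@21: after bdev_passthru_create -b Malloc2 -p Passthru0 there are two.
[ "$(echo "$two" | jq 'length')" -eq 2 ] || exit 1
# rpc.sh@26: after deleting both, the list is empty again.
[ "$(echo "$empty" | jq 'length')" -eq 0 ] || exit 1
echo ok
```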
00:08:43.046   23:49:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:43.046   23:49:58 rpc -- rpc/rpc.sh@84 -- # killprocess 2896257
00:08:43.046   23:49:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 2896257 ']'
00:08:43.046   23:49:58 rpc -- common/autotest_common.sh@958 -- # kill -0 2896257
00:08:43.046    23:49:58 rpc -- common/autotest_common.sh@959 -- # uname
00:08:43.046   23:49:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:43.046    23:49:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896257
00:08:43.304   23:49:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:43.304   23:49:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:43.304   23:49:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896257'
00:08:43.304  killing process with pid 2896257
00:08:43.304   23:49:58 rpc -- common/autotest_common.sh@973 -- # kill 2896257
00:08:43.304   23:49:58 rpc -- common/autotest_common.sh@978 -- # wait 2896257
00:08:43.563  
00:08:43.563  real	0m2.071s
00:08:43.563  user	0m2.625s
00:08:43.563  sys	0m0.689s
00:08:43.563   23:49:59 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:43.563   23:49:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:43.563  ************************************
00:08:43.563  END TEST rpc
00:08:43.563  ************************************
00:08:43.563   23:49:59  -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:08:43.563   23:49:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.563   23:49:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.563   23:49:59  -- common/autotest_common.sh@10 -- # set +x
00:08:43.563  ************************************
00:08:43.563  START TEST skip_rpc
00:08:43.563  ************************************
00:08:43.563   23:49:59 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh
00:08:43.563  * Looking for test storage...
00:08:43.563  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:08:43.563    23:49:59 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:43.563     23:49:59 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:08:43.563     23:49:59 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@345 -- # : 1
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:43.822     23:49:59 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:43.822    23:49:59 skip_rpc -- scripts/common.sh@368 -- # return 0
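The scripts/common.sh trace above steps through `lt 1.15 2`: both versions are split on the characters `.-:` into arrays, then compared numerically field by field. A hedged re-implementation of that compare (a sketch of the logic the trace shows, not the actual scripts/common.sh source):

```shell
# lt VER1 VER2 — succeed (return 0) iff VER1 sorts strictly below VER2.
lt() {
  local IFS=.-: v=0
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  # Walk the longer of the two component lists; missing fields count as 0.
  while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    (( a > b )) && return 1
    (( a < b )) && return 0
    (( v++ ))
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```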
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:43.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.822  		--rc genhtml_branch_coverage=1
00:08:43.822  		--rc genhtml_function_coverage=1
00:08:43.822  		--rc genhtml_legend=1
00:08:43.822  		--rc geninfo_all_blocks=1
00:08:43.822  		--rc geninfo_unexecuted_blocks=1
00:08:43.822  		
00:08:43.822  		'
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:43.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.822  		--rc genhtml_branch_coverage=1
00:08:43.822  		--rc genhtml_function_coverage=1
00:08:43.822  		--rc genhtml_legend=1
00:08:43.822  		--rc geninfo_all_blocks=1
00:08:43.822  		--rc geninfo_unexecuted_blocks=1
00:08:43.822  		
00:08:43.822  		'
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:43.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.822  		--rc genhtml_branch_coverage=1
00:08:43.822  		--rc genhtml_function_coverage=1
00:08:43.822  		--rc genhtml_legend=1
00:08:43.822  		--rc geninfo_all_blocks=1
00:08:43.822  		--rc geninfo_unexecuted_blocks=1
00:08:43.822  		
00:08:43.822  		'
00:08:43.822    23:49:59 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:43.822  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:43.822  		--rc genhtml_branch_coverage=1
00:08:43.823  		--rc genhtml_function_coverage=1
00:08:43.823  		--rc genhtml_legend=1
00:08:43.823  		--rc geninfo_all_blocks=1
00:08:43.823  		--rc geninfo_unexecuted_blocks=1
00:08:43.823  		
00:08:43.823  		'
00:08:43.823   23:49:59 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:08:43.823   23:49:59 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:08:43.823   23:49:59 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:08:43.823   23:49:59 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:43.823   23:49:59 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:43.823   23:49:59 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:43.823  ************************************
00:08:43.823  START TEST skip_rpc
00:08:43.823  ************************************
00:08:43.823   23:49:59 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:08:43.823   23:49:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2896880
00:08:43.823   23:49:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:43.823   23:49:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:08:43.823   23:49:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:08:43.823  [2024-12-09 23:49:59.574693] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:43.823  [2024-12-09 23:49:59.574730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896880 ]
00:08:43.823  [2024-12-09 23:49:59.644625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.081  [2024-12-09 23:49:59.683691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:49.423    23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2896880
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2896880 ']'
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2896880
00:08:49.423    23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:49.423    23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2896880
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2896880'
00:08:49.423  killing process with pid 2896880
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2896880
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2896880
00:08:49.423  
00:08:49.423  real	0m5.358s
00:08:49.423  user	0m5.120s
00:08:49.423  sys	0m0.274s
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:49.423   23:50:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:49.423  ************************************
00:08:49.423  END TEST skip_rpc
00:08:49.423  ************************************
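In the skip_rpc trace above, the target is started with --no-rpc-server, so the `NOT rpc_cmd spdk_get_version` call is expected to fail; the test passes only because the RPC errors out (es=1). A minimal sketch of that expected-failure helper pattern (an illustrative stand-in, not the actual autotest_common.sh implementation, which also tracks the exit status in `es`):

```shell
# NOT CMD... — succeed only when CMD fails; fail when it unexpectedly succeeds.
NOT() {
  if "$@"; then
    return 1  # command succeeded, but failure was expected
  fi
  return 0
}

# `false` always fails, so NOT reports success.
NOT false && echo "failure was expected and observed"
```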
00:08:49.423   23:50:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:08:49.423   23:50:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:49.423   23:50:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:49.423   23:50:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:49.423  ************************************
00:08:49.423  START TEST skip_rpc_with_json
00:08:49.423  ************************************
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2897805
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2897805
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2897805 ']'
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:49.423  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:49.423   23:50:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:49.423  [2024-12-09 23:50:05.004384] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:49.423  [2024-12-09 23:50:05.004438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897805 ]
00:08:49.423  [2024-12-09 23:50:05.076820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:49.423  [2024-12-09 23:50:05.117266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:49.681  [2024-12-09 23:50:05.333676] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:08:49.681  request:
00:08:49.681  {
00:08:49.681  "trtype": "tcp",
00:08:49.681  "method": "nvmf_get_transports",
00:08:49.681  "req_id": 1
00:08:49.681  }
00:08:49.681  Got JSON-RPC error response
00:08:49.681  response:
00:08:49.681  {
00:08:49.681  "code": -19,
00:08:49.681  "message": "No such device"
00:08:49.681  }
00:08:49.681   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:49.682  [2024-12-09 23:50:05.345787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:49.682   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:08:49.682  {
00:08:49.682  "subsystems": [
00:08:49.682  {
00:08:49.682  "subsystem": "fsdev",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "fsdev_set_opts",
00:08:49.682  "params": {
00:08:49.682  "fsdev_io_pool_size": 65535,
00:08:49.682  "fsdev_io_cache_size": 256
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "vfio_user_target",
00:08:49.682  "config": null
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "keyring",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "iobuf",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "iobuf_set_options",
00:08:49.682  "params": {
00:08:49.682  "small_pool_count": 8192,
00:08:49.682  "large_pool_count": 1024,
00:08:49.682  "small_bufsize": 8192,
00:08:49.682  "large_bufsize": 135168,
00:08:49.682  "enable_numa": false
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "sock",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "sock_set_default_impl",
00:08:49.682  "params": {
00:08:49.682  "impl_name": "posix"
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "sock_impl_set_options",
00:08:49.682  "params": {
00:08:49.682  "impl_name": "ssl",
00:08:49.682  "recv_buf_size": 4096,
00:08:49.682  "send_buf_size": 4096,
00:08:49.682  "enable_recv_pipe": true,
00:08:49.682  "enable_quickack": false,
00:08:49.682  "enable_placement_id": 0,
00:08:49.682  "enable_zerocopy_send_server": true,
00:08:49.682  "enable_zerocopy_send_client": false,
00:08:49.682  "zerocopy_threshold": 0,
00:08:49.682  "tls_version": 0,
00:08:49.682  "enable_ktls": false
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "sock_impl_set_options",
00:08:49.682  "params": {
00:08:49.682  "impl_name": "posix",
00:08:49.682  "recv_buf_size": 2097152,
00:08:49.682  "send_buf_size": 2097152,
00:08:49.682  "enable_recv_pipe": true,
00:08:49.682  "enable_quickack": false,
00:08:49.682  "enable_placement_id": 0,
00:08:49.682  "enable_zerocopy_send_server": true,
00:08:49.682  "enable_zerocopy_send_client": false,
00:08:49.682  "zerocopy_threshold": 0,
00:08:49.682  "tls_version": 0,
00:08:49.682  "enable_ktls": false
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "vmd",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "accel",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "accel_set_options",
00:08:49.682  "params": {
00:08:49.682  "small_cache_size": 128,
00:08:49.682  "large_cache_size": 16,
00:08:49.682  "task_count": 2048,
00:08:49.682  "sequence_count": 2048,
00:08:49.682  "buf_count": 2048
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "bdev",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "bdev_set_options",
00:08:49.682  "params": {
00:08:49.682  "bdev_io_pool_size": 65535,
00:08:49.682  "bdev_io_cache_size": 256,
00:08:49.682  "bdev_auto_examine": true,
00:08:49.682  "iobuf_small_cache_size": 128,
00:08:49.682  "iobuf_large_cache_size": 16
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "bdev_raid_set_options",
00:08:49.682  "params": {
00:08:49.682  "process_window_size_kb": 1024,
00:08:49.682  "process_max_bandwidth_mb_sec": 0
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "bdev_iscsi_set_options",
00:08:49.682  "params": {
00:08:49.682  "timeout_sec": 30
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "bdev_nvme_set_options",
00:08:49.682  "params": {
00:08:49.682  "action_on_timeout": "none",
00:08:49.682  "timeout_us": 0,
00:08:49.682  "timeout_admin_us": 0,
00:08:49.682  "keep_alive_timeout_ms": 10000,
00:08:49.682  "arbitration_burst": 0,
00:08:49.682  "low_priority_weight": 0,
00:08:49.682  "medium_priority_weight": 0,
00:08:49.682  "high_priority_weight": 0,
00:08:49.682  "nvme_adminq_poll_period_us": 10000,
00:08:49.682  "nvme_ioq_poll_period_us": 0,
00:08:49.682  "io_queue_requests": 0,
00:08:49.682  "delay_cmd_submit": true,
00:08:49.682  "transport_retry_count": 4,
00:08:49.682  "bdev_retry_count": 3,
00:08:49.682  "transport_ack_timeout": 0,
00:08:49.682  "ctrlr_loss_timeout_sec": 0,
00:08:49.682  "reconnect_delay_sec": 0,
00:08:49.682  "fast_io_fail_timeout_sec": 0,
00:08:49.682  "disable_auto_failback": false,
00:08:49.682  "generate_uuids": false,
00:08:49.682  "transport_tos": 0,
00:08:49.682  "nvme_error_stat": false,
00:08:49.682  "rdma_srq_size": 0,
00:08:49.682  "io_path_stat": false,
00:08:49.682  "allow_accel_sequence": false,
00:08:49.682  "rdma_max_cq_size": 0,
00:08:49.682  "rdma_cm_event_timeout_ms": 0,
00:08:49.682  "dhchap_digests": [
00:08:49.682  "sha256",
00:08:49.682  "sha384",
00:08:49.682  "sha512"
00:08:49.682  ],
00:08:49.682  "dhchap_dhgroups": [
00:08:49.682  "null",
00:08:49.682  "ffdhe2048",
00:08:49.682  "ffdhe3072",
00:08:49.682  "ffdhe4096",
00:08:49.682  "ffdhe6144",
00:08:49.682  "ffdhe8192"
00:08:49.682  ]
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "bdev_nvme_set_hotplug",
00:08:49.682  "params": {
00:08:49.682  "period_us": 100000,
00:08:49.682  "enable": false
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "bdev_wait_for_examine"
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "scsi",
00:08:49.682  "config": null
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "scheduler",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "framework_set_scheduler",
00:08:49.682  "params": {
00:08:49.682  "name": "static"
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "vhost_scsi",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "vhost_blk",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "ublk",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "nbd",
00:08:49.682  "config": []
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "nvmf",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.682  "method": "nvmf_set_config",
00:08:49.682  "params": {
00:08:49.682  "discovery_filter": "match_any",
00:08:49.682  "admin_cmd_passthru": {
00:08:49.682  "identify_ctrlr": false
00:08:49.682  },
00:08:49.682  "dhchap_digests": [
00:08:49.682  "sha256",
00:08:49.682  "sha384",
00:08:49.682  "sha512"
00:08:49.682  ],
00:08:49.682  "dhchap_dhgroups": [
00:08:49.682  "null",
00:08:49.682  "ffdhe2048",
00:08:49.682  "ffdhe3072",
00:08:49.682  "ffdhe4096",
00:08:49.682  "ffdhe6144",
00:08:49.682  "ffdhe8192"
00:08:49.682  ]
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "nvmf_set_max_subsystems",
00:08:49.682  "params": {
00:08:49.682  "max_subsystems": 1024
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "nvmf_set_crdt",
00:08:49.682  "params": {
00:08:49.682  "crdt1": 0,
00:08:49.682  "crdt2": 0,
00:08:49.682  "crdt3": 0
00:08:49.682  }
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "method": "nvmf_create_transport",
00:08:49.682  "params": {
00:08:49.682  "trtype": "TCP",
00:08:49.682  "max_queue_depth": 128,
00:08:49.682  "max_io_qpairs_per_ctrlr": 127,
00:08:49.682  "in_capsule_data_size": 4096,
00:08:49.682  "max_io_size": 131072,
00:08:49.682  "io_unit_size": 131072,
00:08:49.682  "max_aq_depth": 128,
00:08:49.682  "num_shared_buffers": 511,
00:08:49.682  "buf_cache_size": 4294967295,
00:08:49.682  "dif_insert_or_strip": false,
00:08:49.682  "zcopy": false,
00:08:49.682  "c2h_success": true,
00:08:49.682  "sock_priority": 0,
00:08:49.682  "abort_timeout_sec": 1,
00:08:49.682  "ack_timeout": 0,
00:08:49.682  "data_wr_pool_size": 0
00:08:49.682  }
00:08:49.682  }
00:08:49.682  ]
00:08:49.682  },
00:08:49.682  {
00:08:49.682  "subsystem": "iscsi",
00:08:49.682  "config": [
00:08:49.682  {
00:08:49.683  "method": "iscsi_set_options",
00:08:49.683  "params": {
00:08:49.683  "node_base": "iqn.2016-06.io.spdk",
00:08:49.683  "max_sessions": 128,
00:08:49.683  "max_connections_per_session": 2,
00:08:49.683  "max_queue_depth": 64,
00:08:49.683  "default_time2wait": 2,
00:08:49.683  "default_time2retain": 20,
00:08:49.683  "first_burst_length": 8192,
00:08:49.683  "immediate_data": true,
00:08:49.683  "allow_duplicated_isid": false,
00:08:49.683  "error_recovery_level": 0,
00:08:49.683  "nop_timeout": 60,
00:08:49.683  "nop_in_interval": 30,
00:08:49.683  "disable_chap": false,
00:08:49.683  "require_chap": false,
00:08:49.683  "mutual_chap": false,
00:08:49.683  "chap_group": 0,
00:08:49.683  "max_large_datain_per_connection": 64,
00:08:49.683  "max_r2t_per_connection": 4,
00:08:49.683  "pdu_pool_size": 36864,
00:08:49.683  "immediate_data_pool_size": 16384,
00:08:49.683  "data_out_pool_size": 2048
00:08:49.683  }
00:08:49.683  }
00:08:49.683  ]
00:08:49.683  }
00:08:49.683  ]
00:08:49.683  }
00:08:49.683   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:08:49.683   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2897805
00:08:49.683   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2897805 ']'
00:08:49.683   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2897805
00:08:49.683    23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:49.683   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:49.683    23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2897805
00:08:49.941   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:49.941   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:49.941   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2897805'
00:08:49.941  killing process with pid 2897805
00:08:49.941   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2897805
00:08:49.941   23:50:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2897805
00:08:50.200   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2898033
00:08:50.200   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:08:50.200   23:50:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:08:55.470   23:50:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2898033
00:08:55.470   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2898033 ']'
00:08:55.470   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2898033
00:08:55.470    23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:55.470   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:55.471    23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898033
00:08:55.471   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:55.471   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:55.471   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898033'
00:08:55.471  killing process with pid 2898033
00:08:55.471   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2898033
00:08:55.471   23:50:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2898033
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt
00:08:55.471  
00:08:55.471  real	0m6.279s
00:08:55.471  user	0m5.966s
00:08:55.471  sys	0m0.607s
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:55.471  ************************************
00:08:55.471  END TEST skip_rpc_with_json
00:08:55.471  ************************************
00:08:55.471   23:50:11 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:08:55.471   23:50:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.471   23:50:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.471   23:50:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:55.471  ************************************
00:08:55.471  START TEST skip_rpc_with_delay
00:08:55.471  ************************************
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.471    23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.471    23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:08:55.471   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:55.730  [2024-12-09 23:50:11.354050] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:55.730  
00:08:55.730  real	0m0.067s
00:08:55.730  user	0m0.042s
00:08:55.730  sys	0m0.024s
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.730   23:50:11 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:08:55.730  ************************************
00:08:55.730  END TEST skip_rpc_with_delay
00:08:55.730  ************************************
00:08:55.730    23:50:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:08:55.730   23:50:11 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:08:55.730   23:50:11 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:08:55.730   23:50:11 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.730   23:50:11 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.730   23:50:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:55.730  ************************************
00:08:55.730  START TEST exit_on_failed_rpc_init
00:08:55.730  ************************************
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2898981
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2898981
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2898981 ']'
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:55.730  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.730   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:55.730  [2024-12-09 23:50:11.490186] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:55.730  [2024-12-09 23:50:11.490228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898981 ]
00:08:55.730  [2024-12-09 23:50:11.563371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.989  [2024-12-09 23:50:11.603157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.989    23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.989    23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:08:55.989   23:50:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:08:56.248  [2024-12-09 23:50:11.882316] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:56.248  [2024-12-09 23:50:11.882358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2898997 ]
00:08:56.248  [2024-12-09 23:50:11.956131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.248  [2024-12-09 23:50:11.996062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:56.248  [2024-12-09 23:50:11.996118] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:08:56.248  [2024-12-09 23:50:11.996127] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:08:56.248  [2024-12-09 23:50:11.996133] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2898981
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2898981 ']'
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2898981
00:08:56.248    23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:56.248    23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2898981
00:08:56.248   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:56.249   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:56.249   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2898981'
00:08:56.249  killing process with pid 2898981
00:08:56.249   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2898981
00:08:56.249   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2898981
00:08:56.817  
00:08:56.817  real	0m0.950s
00:08:56.817  user	0m1.004s
00:08:56.817  sys	0m0.400s
00:08:56.817   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.817   23:50:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:56.817  ************************************
00:08:56.817  END TEST exit_on_failed_rpc_init
00:08:56.817  ************************************
00:08:56.817   23:50:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:08:56.817  
00:08:56.817  real	0m13.115s
00:08:56.817  user	0m12.342s
00:08:56.817  sys	0m1.586s
00:08:56.817   23:50:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.817   23:50:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:56.817  ************************************
00:08:56.817  END TEST skip_rpc
00:08:56.817  ************************************
00:08:56.817   23:50:12  -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:08:56.817   23:50:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.817   23:50:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.817   23:50:12  -- common/autotest_common.sh@10 -- # set +x
00:08:56.817  ************************************
00:08:56.817  START TEST rpc_client
00:08:56.817  ************************************
00:08:56.817   23:50:12 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:08:56.817  * Looking for test storage...
00:08:56.817  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:56.817     23:50:12 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:08:56.817     23:50:12 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@345 -- # : 1
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@353 -- # local d=1
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@355 -- # echo 1
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@353 -- # local d=2
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:56.817     23:50:12 rpc_client -- scripts/common.sh@355 -- # echo 2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:56.817    23:50:12 rpc_client -- scripts/common.sh@368 -- # return 0
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:56.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.817  		--rc genhtml_branch_coverage=1
00:08:56.817  		--rc genhtml_function_coverage=1
00:08:56.817  		--rc genhtml_legend=1
00:08:56.817  		--rc geninfo_all_blocks=1
00:08:56.817  		--rc geninfo_unexecuted_blocks=1
00:08:56.817  		
00:08:56.817  		'
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:56.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.817  		--rc genhtml_branch_coverage=1
00:08:56.817  		--rc genhtml_function_coverage=1
00:08:56.817  		--rc genhtml_legend=1
00:08:56.817  		--rc geninfo_all_blocks=1
00:08:56.817  		--rc geninfo_unexecuted_blocks=1
00:08:56.817  		
00:08:56.817  		'
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:56.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.817  		--rc genhtml_branch_coverage=1
00:08:56.817  		--rc genhtml_function_coverage=1
00:08:56.817  		--rc genhtml_legend=1
00:08:56.817  		--rc geninfo_all_blocks=1
00:08:56.817  		--rc geninfo_unexecuted_blocks=1
00:08:56.817  		
00:08:56.817  		'
00:08:56.817    23:50:12 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:56.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.817  		--rc genhtml_branch_coverage=1
00:08:56.817  		--rc genhtml_function_coverage=1
00:08:56.817  		--rc genhtml_legend=1
00:08:56.817  		--rc geninfo_all_blocks=1
00:08:56.817  		--rc geninfo_unexecuted_blocks=1
00:08:56.817  		
00:08:56.817  		'
00:08:56.817   23:50:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:08:57.076  OK
00:08:57.076   23:50:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:08:57.076  
00:08:57.076  real	0m0.198s
00:08:57.076  user	0m0.114s
00:08:57.076  sys	0m0.097s
00:08:57.076   23:50:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.076   23:50:12 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:08:57.076  ************************************
00:08:57.076  END TEST rpc_client
00:08:57.076  ************************************
00:08:57.076   23:50:12  -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:08:57.076   23:50:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.076   23:50:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.076   23:50:12  -- common/autotest_common.sh@10 -- # set +x
00:08:57.076  ************************************
00:08:57.076  START TEST json_config
00:08:57.076  ************************************
00:08:57.076   23:50:12 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:57.076     23:50:12 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:08:57.076     23:50:12 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:57.076    23:50:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:57.076    23:50:12 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:08:57.076    23:50:12 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:08:57.076    23:50:12 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:08:57.076    23:50:12 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:08:57.076    23:50:12 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:08:57.076    23:50:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:57.076    23:50:12 json_config -- scripts/common.sh@344 -- # case "$op" in
00:08:57.076    23:50:12 json_config -- scripts/common.sh@345 -- # : 1
00:08:57.076    23:50:12 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:57.076    23:50:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:57.076     23:50:12 json_config -- scripts/common.sh@365 -- # decimal 1
00:08:57.076     23:50:12 json_config -- scripts/common.sh@353 -- # local d=1
00:08:57.076     23:50:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:57.076     23:50:12 json_config -- scripts/common.sh@355 -- # echo 1
00:08:57.076    23:50:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:08:57.076     23:50:12 json_config -- scripts/common.sh@366 -- # decimal 2
00:08:57.076     23:50:12 json_config -- scripts/common.sh@353 -- # local d=2
00:08:57.076     23:50:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:57.076     23:50:12 json_config -- scripts/common.sh@355 -- # echo 2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:08:57.076    23:50:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:57.076    23:50:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:57.076    23:50:12 json_config -- scripts/common.sh@368 -- # return 0
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:57.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.076  		--rc genhtml_branch_coverage=1
00:08:57.076  		--rc genhtml_function_coverage=1
00:08:57.076  		--rc genhtml_legend=1
00:08:57.076  		--rc geninfo_all_blocks=1
00:08:57.076  		--rc geninfo_unexecuted_blocks=1
00:08:57.076  		
00:08:57.076  		'
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:57.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.076  		--rc genhtml_branch_coverage=1
00:08:57.076  		--rc genhtml_function_coverage=1
00:08:57.076  		--rc genhtml_legend=1
00:08:57.076  		--rc geninfo_all_blocks=1
00:08:57.076  		--rc geninfo_unexecuted_blocks=1
00:08:57.076  		
00:08:57.076  		'
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:57.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.076  		--rc genhtml_branch_coverage=1
00:08:57.076  		--rc genhtml_function_coverage=1
00:08:57.076  		--rc genhtml_legend=1
00:08:57.076  		--rc geninfo_all_blocks=1
00:08:57.076  		--rc geninfo_unexecuted_blocks=1
00:08:57.076  		
00:08:57.076  		'
00:08:57.076    23:50:12 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:08:57.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:57.076  		--rc genhtml_branch_coverage=1
00:08:57.076  		--rc genhtml_function_coverage=1
00:08:57.076  		--rc genhtml_legend=1
00:08:57.076  		--rc geninfo_all_blocks=1
00:08:57.076  		--rc geninfo_unexecuted_blocks=1
00:08:57.076  		
00:08:57.076  		'
00:08:57.076   23:50:12 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:57.076     23:50:12 json_config -- nvmf/common.sh@7 -- # uname -s
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:57.076     23:50:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:57.076    23:50:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:57.077     23:50:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:08:57.077     23:50:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:57.077     23:50:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:57.077     23:50:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:57.077      23:50:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:57.077      23:50:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:57.077      23:50:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:57.077      23:50:12 json_config -- paths/export.sh@5 -- # export PATH
00:08:57.077      23:50:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@51 -- # : 0
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:57.077  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:57.077    23:50:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:08:57.077   23:50:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:08:57.336  INFO: JSON configuration test init
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:57.336   23:50:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:08:57.336   23:50:12 json_config -- json_config/common.sh@9 -- # local app=target
00:08:57.336   23:50:12 json_config -- json_config/common.sh@10 -- # shift
00:08:57.336   23:50:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:08:57.336   23:50:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:08:57.336   23:50:12 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:08:57.336   23:50:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:57.336   23:50:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:57.336   23:50:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2899343
00:08:57.336   23:50:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:08:57.336  Waiting for target to run...
00:08:57.336   23:50:12 json_config -- json_config/common.sh@25 -- # waitforlisten 2899343 /var/tmp/spdk_tgt.sock
00:08:57.336   23:50:12 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 2899343 ']'
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:08:57.336  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:57.336   23:50:12 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:57.336  [2024-12-09 23:50:13.000946] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:08:57.336  [2024-12-09 23:50:13.000991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899343 ]
00:08:57.595  [2024-12-09 23:50:13.286482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:57.595  [2024-12-09 23:50:13.316131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@868 -- # return 0
00:08:58.162   23:50:13 json_config -- json_config/common.sh@26 -- # echo ''
00:08:58.162  
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:58.162   23:50:13 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:08:58.162   23:50:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:08:58.162   23:50:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:09:01.448   23:50:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:01.448   23:50:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:09:01.448   23:50:16 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:09:01.448    23:50:16 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:09:01.448    23:50:16 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:09:01.448    23:50:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@51 -- # local get_types
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:09:01.448    23:50:17 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:09:01.448    23:50:17 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:09:01.448    23:50:17 json_config -- json_config/json_config.sh@54 -- # sort
00:09:01.448    23:50:17 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:09:01.448   23:50:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:01.448   23:50:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@62 -- # return 0
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:09:01.448   23:50:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:01.448   23:50:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:09:01.448   23:50:17 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:09:01.448   23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:09:01.707  MallocForNvmf0
00:09:01.707   23:50:17 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:09:01.707   23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:09:01.707  MallocForNvmf1
00:09:01.965   23:50:17 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:09:01.965   23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:09:01.965  [2024-12-09 23:50:17.747133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:01.965   23:50:17 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:01.965   23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:09:02.224   23:50:17 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:09:02.224   23:50:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:09:02.483   23:50:18 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:09:02.483   23:50:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:09:02.742   23:50:18 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:09:02.742   23:50:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:09:02.742  [2024-12-09 23:50:18.537530] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:02.742   23:50:18 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:09:02.742   23:50:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:02.742   23:50:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:03.000   23:50:18 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:09:03.000   23:50:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:03.000   23:50:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:03.000   23:50:18 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:09:03.000   23:50:18 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:03.000   23:50:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:03.000  MallocBdevForConfigChangeCheck
00:09:03.000   23:50:18 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:09:03.000   23:50:18 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:03.000   23:50:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:03.259   23:50:18 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:09:03.259   23:50:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:03.518   23:50:19 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:09:03.518  INFO: shutting down applications...
00:09:03.518   23:50:19 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:03.518   23:50:19 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:03.518   23:50:19 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:03.518   23:50:19 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:05.421  Calling clear_iscsi_subsystem
00:09:05.421  Calling clear_nvmf_subsystem
00:09:05.421  Calling clear_nbd_subsystem
00:09:05.421  Calling clear_ublk_subsystem
00:09:05.421  Calling clear_vhost_blk_subsystem
00:09:05.421  Calling clear_vhost_scsi_subsystem
00:09:05.421  Calling clear_bdev_subsystem
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:05.421   23:50:20 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:09:05.421   23:50:21 json_config -- json_config/json_config.sh@352 -- # break
00:09:05.421   23:50:21 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:09:05.421   23:50:21 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:05.421   23:50:21 json_config -- json_config/common.sh@31 -- # local app=target
00:09:05.421   23:50:21 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:05.421   23:50:21 json_config -- json_config/common.sh@35 -- # [[ -n 2899343 ]]
00:09:05.421   23:50:21 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2899343
00:09:05.421   23:50:21 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:05.421   23:50:21 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:05.421   23:50:21 json_config -- json_config/common.sh@41 -- # kill -0 2899343
00:09:05.421   23:50:21 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:05.991   23:50:21 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:05.991   23:50:21 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:05.991   23:50:21 json_config -- json_config/common.sh@41 -- # kill -0 2899343
00:09:05.991   23:50:21 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:05.991   23:50:21 json_config -- json_config/common.sh@43 -- # break
00:09:05.991   23:50:21 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:05.991   23:50:21 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:05.991  SPDK target shutdown done
00:09:05.991   23:50:21 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:09:05.991  INFO: relaunching applications...
00:09:05.991   23:50:21 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:05.991   23:50:21 json_config -- json_config/common.sh@9 -- # local app=target
00:09:05.991   23:50:21 json_config -- json_config/common.sh@10 -- # shift
00:09:05.991   23:50:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:05.991   23:50:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:05.991   23:50:21 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:05.991   23:50:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:05.991   23:50:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:05.991   23:50:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2900822
00:09:05.991   23:50:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:05.991  Waiting for target to run...
00:09:05.991   23:50:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:05.991   23:50:21 json_config -- json_config/common.sh@25 -- # waitforlisten 2900822 /var/tmp/spdk_tgt.sock
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@835 -- # '[' -z 2900822 ']'
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:05.991  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:05.991   23:50:21 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:05.991  [2024-12-09 23:50:21.705264] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:05.991  [2024-12-09 23:50:21.705314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900822 ]
00:09:06.559  [2024-12-09 23:50:22.157556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.559  [2024-12-09 23:50:22.212111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.845  [2024-12-09 23:50:25.236957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:09.845  [2024-12-09 23:50:25.269228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:10.103   23:50:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:10.103   23:50:25 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:10.103   23:50:25 json_config -- json_config/common.sh@26 -- # echo ''
00:09:10.103  
00:09:10.103   23:50:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:10.103   23:50:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:10.103  INFO: Checking if target configuration is the same...
00:09:10.103   23:50:25 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.103    23:50:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:10.103    23:50:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:10.103  + '[' 2 -ne 2 ']'
00:09:10.103  +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:10.103  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:10.103  + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:10.103  +++ basename /dev/fd/62
00:09:10.103  ++ mktemp /tmp/62.XXX
00:09:10.361  + tmp_file_1=/tmp/62.lSj
00:09:10.361  +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.361  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:10.361  + tmp_file_2=/tmp/spdk_tgt_config.json.A8r
00:09:10.361  + ret=0
00:09:10.361  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:10.620  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:10.620  + diff -u /tmp/62.lSj /tmp/spdk_tgt_config.json.A8r
00:09:10.620  + echo 'INFO: JSON config files are the same'
00:09:10.620  INFO: JSON config files are the same
00:09:10.620  + rm /tmp/62.lSj /tmp/spdk_tgt_config.json.A8r
00:09:10.620  + exit 0
00:09:10.620   23:50:26 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:10.620   23:50:26 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:10.620  INFO: changing configuration and checking if this can be detected...
00:09:10.620   23:50:26 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:10.620   23:50:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:10.879   23:50:26 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.879    23:50:26 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:10.879    23:50:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:10.879  + '[' 2 -ne 2 ']'
00:09:10.879  +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:10.879  ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:10.879  + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:10.879  +++ basename /dev/fd/62
00:09:10.879  ++ mktemp /tmp/62.XXX
00:09:10.879  + tmp_file_1=/tmp/62.Ijp
00:09:10.879  +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:10.879  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:10.879  + tmp_file_2=/tmp/spdk_tgt_config.json.2BJ
00:09:10.879  + ret=0
00:09:10.879  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:11.138  + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:11.138  + diff -u /tmp/62.Ijp /tmp/spdk_tgt_config.json.2BJ
00:09:11.138  + ret=1
00:09:11.138  + echo '=== Start of file: /tmp/62.Ijp ==='
00:09:11.138  + cat /tmp/62.Ijp
00:09:11.138  + echo '=== End of file: /tmp/62.Ijp ==='
00:09:11.138  + echo ''
00:09:11.138  + echo '=== Start of file: /tmp/spdk_tgt_config.json.2BJ ==='
00:09:11.138  + cat /tmp/spdk_tgt_config.json.2BJ
00:09:11.138  + echo '=== End of file: /tmp/spdk_tgt_config.json.2BJ ==='
00:09:11.138  + echo ''
00:09:11.138  + rm /tmp/62.Ijp /tmp/spdk_tgt_config.json.2BJ
00:09:11.138  + exit 1
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:11.138  INFO: configuration change detected.
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 2900822 ]]
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:09:11.138    23:50:26 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:11.138   23:50:26 json_config -- json_config/json_config.sh@330 -- # killprocess 2900822
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@954 -- # '[' -z 2900822 ']'
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@958 -- # kill -0 2900822
00:09:11.138    23:50:26 json_config -- common/autotest_common.sh@959 -- # uname
00:09:11.138   23:50:26 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:11.138    23:50:26 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2900822
00:09:11.397   23:50:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:11.397   23:50:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:11.397   23:50:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2900822'
00:09:11.397  killing process with pid 2900822
00:09:11.397   23:50:27 json_config -- common/autotest_common.sh@973 -- # kill 2900822
00:09:11.397   23:50:27 json_config -- common/autotest_common.sh@978 -- # wait 2900822
00:09:12.775   23:50:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:12.775   23:50:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:12.775   23:50:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:12.775   23:50:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:12.775   23:50:28 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:12.775   23:50:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:12.775  INFO: Success
00:09:12.775  
00:09:12.775  real	0m15.774s
00:09:12.775  user	0m16.377s
00:09:12.775  sys	0m2.563s
00:09:12.775   23:50:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.775   23:50:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:12.775  ************************************
00:09:12.775  END TEST json_config
00:09:12.775  ************************************
00:09:12.775   23:50:28  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:12.775   23:50:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.775   23:50:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.775   23:50:28  -- common/autotest_common.sh@10 -- # set +x
00:09:12.775  ************************************
00:09:12.775  START TEST json_config_extra_key
00:09:12.775  ************************************
00:09:12.775   23:50:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:13.035     23:50:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:09:13.035     23:50:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:13.035    23:50:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:13.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.035  		--rc genhtml_branch_coverage=1
00:09:13.035  		--rc genhtml_function_coverage=1
00:09:13.035  		--rc genhtml_legend=1
00:09:13.035  		--rc geninfo_all_blocks=1
00:09:13.035  		--rc geninfo_unexecuted_blocks=1
00:09:13.035  		
00:09:13.035  		'
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:13.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.035  		--rc genhtml_branch_coverage=1
00:09:13.035  		--rc genhtml_function_coverage=1
00:09:13.035  		--rc genhtml_legend=1
00:09:13.035  		--rc geninfo_all_blocks=1
00:09:13.035  		--rc geninfo_unexecuted_blocks=1
00:09:13.035  		
00:09:13.035  		'
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:13.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.035  		--rc genhtml_branch_coverage=1
00:09:13.035  		--rc genhtml_function_coverage=1
00:09:13.035  		--rc genhtml_legend=1
00:09:13.035  		--rc geninfo_all_blocks=1
00:09:13.035  		--rc geninfo_unexecuted_blocks=1
00:09:13.035  		
00:09:13.035  		'
00:09:13.035    23:50:28 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:13.035  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.035  		--rc genhtml_branch_coverage=1
00:09:13.035  		--rc genhtml_function_coverage=1
00:09:13.035  		--rc genhtml_legend=1
00:09:13.035  		--rc geninfo_all_blocks=1
00:09:13.035  		--rc geninfo_unexecuted_blocks=1
00:09:13.035  		
00:09:13.035  		'
00:09:13.035   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:13.035     23:50:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:13.035     23:50:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:13.035     23:50:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:13.035      23:50:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:13.035      23:50:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:13.035      23:50:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:13.035      23:50:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:13.035      23:50:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:13.035    23:50:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:13.035  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:13.036    23:50:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:13.036    23:50:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:13.036    23:50:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:13.036  INFO: launching applications...
00:09:13.036   23:50:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2902274
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:13.036  Waiting for target to run...
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2902274 /var/tmp/spdk_tgt.sock
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2902274 ']'
00:09:13.036   23:50:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:13.036  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:13.036   23:50:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:13.036  [2024-12-09 23:50:28.844127] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:13.036  [2024-12-09 23:50:28.844187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902274 ]
00:09:13.603  [2024-12-09 23:50:29.288059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.603  [2024-12-09 23:50:29.343242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.862   23:50:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:13.862   23:50:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:13.862  
00:09:13.862   23:50:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:13.862  INFO: shutting down applications...
00:09:13.862   23:50:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2902274 ]]
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2902274
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2902274
00:09:13.862   23:50:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2902274
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:14.430   23:50:30 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:14.430  SPDK target shutdown done
00:09:14.430   23:50:30 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:14.430  Success
00:09:14.430  
00:09:14.430  real	0m1.584s
00:09:14.430  user	0m1.224s
00:09:14.430  sys	0m0.560s
00:09:14.430   23:50:30 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.430   23:50:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:14.430  ************************************
00:09:14.430  END TEST json_config_extra_key
00:09:14.430  ************************************
00:09:14.430   23:50:30  -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:14.430   23:50:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:14.430   23:50:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.430   23:50:30  -- common/autotest_common.sh@10 -- # set +x
00:09:14.430  ************************************
00:09:14.430  START TEST alias_rpc
00:09:14.430  ************************************
00:09:14.430   23:50:30 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:14.690  * Looking for test storage...
00:09:14.690  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:14.690     23:50:30 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:14.690     23:50:30 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:14.690     23:50:30 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:14.690    23:50:30 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:14.690  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.690  		--rc genhtml_branch_coverage=1
00:09:14.690  		--rc genhtml_function_coverage=1
00:09:14.690  		--rc genhtml_legend=1
00:09:14.690  		--rc geninfo_all_blocks=1
00:09:14.690  		--rc geninfo_unexecuted_blocks=1
00:09:14.690  		
00:09:14.690  		'
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:14.690  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.690  		--rc genhtml_branch_coverage=1
00:09:14.690  		--rc genhtml_function_coverage=1
00:09:14.690  		--rc genhtml_legend=1
00:09:14.690  		--rc geninfo_all_blocks=1
00:09:14.690  		--rc geninfo_unexecuted_blocks=1
00:09:14.690  		
00:09:14.690  		'
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:14.690  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.690  		--rc genhtml_branch_coverage=1
00:09:14.690  		--rc genhtml_function_coverage=1
00:09:14.690  		--rc genhtml_legend=1
00:09:14.690  		--rc geninfo_all_blocks=1
00:09:14.690  		--rc geninfo_unexecuted_blocks=1
00:09:14.690  		
00:09:14.690  		'
00:09:14.690    23:50:30 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:14.690  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.690  		--rc genhtml_branch_coverage=1
00:09:14.690  		--rc genhtml_function_coverage=1
00:09:14.690  		--rc genhtml_legend=1
00:09:14.690  		--rc geninfo_all_blocks=1
00:09:14.690  		--rc geninfo_unexecuted_blocks=1
00:09:14.690  		
00:09:14.690  		'
00:09:14.690   23:50:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:14.690   23:50:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2902562
00:09:14.690   23:50:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2902562
00:09:14.690   23:50:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2902562 ']'
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:14.690  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:14.690   23:50:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:14.690  [2024-12-09 23:50:30.486175] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:14.690  [2024-12-09 23:50:30.486224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902562 ]
00:09:14.949  [2024-12-09 23:50:30.562070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.949  [2024-12-09 23:50:30.602886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.207   23:50:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:15.207   23:50:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:15.207   23:50:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:09:15.207   23:50:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2902562
00:09:15.207   23:50:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2902562 ']'
00:09:15.207   23:50:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2902562
00:09:15.207    23:50:31 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:09:15.207   23:50:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:15.207    23:50:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902562
00:09:15.466   23:50:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:15.466   23:50:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:15.466   23:50:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902562'
00:09:15.466  killing process with pid 2902562
00:09:15.466   23:50:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 2902562
00:09:15.466   23:50:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 2902562
00:09:15.725  
00:09:15.725  real	0m1.114s
00:09:15.725  user	0m1.152s
00:09:15.725  sys	0m0.390s
00:09:15.725   23:50:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:15.725   23:50:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:15.725  ************************************
00:09:15.725  END TEST alias_rpc
00:09:15.725  ************************************
00:09:15.725   23:50:31  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:15.725   23:50:31  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:15.725   23:50:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:15.725   23:50:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:15.725   23:50:31  -- common/autotest_common.sh@10 -- # set +x
00:09:15.725  ************************************
00:09:15.725  START TEST spdkcli_tcp
00:09:15.725  ************************************
00:09:15.725   23:50:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:09:15.725  * Looking for test storage...
00:09:15.725  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:09:15.725    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:15.725     23:50:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:15.725     23:50:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:15.985     23:50:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:15.985    23:50:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
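The scripts/common.sh trace above implements a field-wise version compare: both versions are split on `.-:` into arrays, then compared numerically position by position, with missing fields treated as 0. A condensed, hedged sketch of the same idea (a reconstruction of the `lt 1.15 2` path, not the exact cmp_versions body):

```shell
lt() {
    local IFS=.-:                 # same separator set the trace reads with
    local -a ver1=($1) ver2=($2)  # split each version into numeric fields
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                      # equal versions are not strictly less-than
}
```

Comparing field by field is what makes `lt 1.9 1.10` true: a plain string compare would get that wrong, which is exactly why the trace decomposes the versions with `read -ra` instead of using `[[ < ]]`.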
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:15.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.985  		--rc genhtml_branch_coverage=1
00:09:15.985  		--rc genhtml_function_coverage=1
00:09:15.985  		--rc genhtml_legend=1
00:09:15.985  		--rc geninfo_all_blocks=1
00:09:15.985  		--rc geninfo_unexecuted_blocks=1
00:09:15.985  		
00:09:15.985  		'
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:15.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.985  		--rc genhtml_branch_coverage=1
00:09:15.985  		--rc genhtml_function_coverage=1
00:09:15.985  		--rc genhtml_legend=1
00:09:15.985  		--rc geninfo_all_blocks=1
00:09:15.985  		--rc geninfo_unexecuted_blocks=1
00:09:15.985  		
00:09:15.985  		'
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:15.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.985  		--rc genhtml_branch_coverage=1
00:09:15.985  		--rc genhtml_function_coverage=1
00:09:15.985  		--rc genhtml_legend=1
00:09:15.985  		--rc geninfo_all_blocks=1
00:09:15.985  		--rc geninfo_unexecuted_blocks=1
00:09:15.985  		
00:09:15.985  		'
00:09:15.985    23:50:31 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:15.985  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:15.985  		--rc genhtml_branch_coverage=1
00:09:15.985  		--rc genhtml_function_coverage=1
00:09:15.985  		--rc genhtml_legend=1
00:09:15.985  		--rc geninfo_all_blocks=1
00:09:15.985  		--rc geninfo_unexecuted_blocks=1
00:09:15.985  		
00:09:15.985  		'
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:09:15.985    23:50:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:09:15.985    23:50:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2902843
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2902843
00:09:15.985   23:50:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2902843 ']'
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:15.985  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:15.985   23:50:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:15.985  [2024-12-09 23:50:31.678648] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:15.985  [2024-12-09 23:50:31.678693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902843 ]
00:09:15.985  [2024-12-09 23:50:31.749647] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:15.985  [2024-12-09 23:50:31.789145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:15.985  [2024-12-09 23:50:31.789146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.244   23:50:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:16.244   23:50:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:09:16.244   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2902858
00:09:16.244   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:16.244   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:16.503  [
00:09:16.503    "bdev_malloc_delete",
00:09:16.503    "bdev_malloc_create",
00:09:16.503    "bdev_null_resize",
00:09:16.503    "bdev_null_delete",
00:09:16.503    "bdev_null_create",
00:09:16.503    "bdev_nvme_cuse_unregister",
00:09:16.503    "bdev_nvme_cuse_register",
00:09:16.503    "bdev_opal_new_user",
00:09:16.503    "bdev_opal_set_lock_state",
00:09:16.503    "bdev_opal_delete",
00:09:16.503    "bdev_opal_get_info",
00:09:16.503    "bdev_opal_create",
00:09:16.503    "bdev_nvme_opal_revert",
00:09:16.503    "bdev_nvme_opal_init",
00:09:16.503    "bdev_nvme_send_cmd",
00:09:16.503    "bdev_nvme_set_keys",
00:09:16.503    "bdev_nvme_get_path_iostat",
00:09:16.503    "bdev_nvme_get_mdns_discovery_info",
00:09:16.503    "bdev_nvme_stop_mdns_discovery",
00:09:16.503    "bdev_nvme_start_mdns_discovery",
00:09:16.503    "bdev_nvme_set_multipath_policy",
00:09:16.503    "bdev_nvme_set_preferred_path",
00:09:16.503    "bdev_nvme_get_io_paths",
00:09:16.503    "bdev_nvme_remove_error_injection",
00:09:16.503    "bdev_nvme_add_error_injection",
00:09:16.503    "bdev_nvme_get_discovery_info",
00:09:16.503    "bdev_nvme_stop_discovery",
00:09:16.503    "bdev_nvme_start_discovery",
00:09:16.503    "bdev_nvme_get_controller_health_info",
00:09:16.503    "bdev_nvme_disable_controller",
00:09:16.503    "bdev_nvme_enable_controller",
00:09:16.503    "bdev_nvme_reset_controller",
00:09:16.503    "bdev_nvme_get_transport_statistics",
00:09:16.503    "bdev_nvme_apply_firmware",
00:09:16.503    "bdev_nvme_detach_controller",
00:09:16.503    "bdev_nvme_get_controllers",
00:09:16.503    "bdev_nvme_attach_controller",
00:09:16.503    "bdev_nvme_set_hotplug",
00:09:16.503    "bdev_nvme_set_options",
00:09:16.503    "bdev_passthru_delete",
00:09:16.503    "bdev_passthru_create",
00:09:16.504    "bdev_lvol_set_parent_bdev",
00:09:16.504    "bdev_lvol_set_parent",
00:09:16.504    "bdev_lvol_check_shallow_copy",
00:09:16.504    "bdev_lvol_start_shallow_copy",
00:09:16.504    "bdev_lvol_grow_lvstore",
00:09:16.504    "bdev_lvol_get_lvols",
00:09:16.504    "bdev_lvol_get_lvstores",
00:09:16.504    "bdev_lvol_delete",
00:09:16.504    "bdev_lvol_set_read_only",
00:09:16.504    "bdev_lvol_resize",
00:09:16.504    "bdev_lvol_decouple_parent",
00:09:16.504    "bdev_lvol_inflate",
00:09:16.504    "bdev_lvol_rename",
00:09:16.504    "bdev_lvol_clone_bdev",
00:09:16.504    "bdev_lvol_clone",
00:09:16.504    "bdev_lvol_snapshot",
00:09:16.504    "bdev_lvol_create",
00:09:16.504    "bdev_lvol_delete_lvstore",
00:09:16.504    "bdev_lvol_rename_lvstore",
00:09:16.504    "bdev_lvol_create_lvstore",
00:09:16.504    "bdev_raid_set_options",
00:09:16.504    "bdev_raid_remove_base_bdev",
00:09:16.504    "bdev_raid_add_base_bdev",
00:09:16.504    "bdev_raid_delete",
00:09:16.504    "bdev_raid_create",
00:09:16.504    "bdev_raid_get_bdevs",
00:09:16.504    "bdev_error_inject_error",
00:09:16.504    "bdev_error_delete",
00:09:16.504    "bdev_error_create",
00:09:16.504    "bdev_split_delete",
00:09:16.504    "bdev_split_create",
00:09:16.504    "bdev_delay_delete",
00:09:16.504    "bdev_delay_create",
00:09:16.504    "bdev_delay_update_latency",
00:09:16.504    "bdev_zone_block_delete",
00:09:16.504    "bdev_zone_block_create",
00:09:16.504    "blobfs_create",
00:09:16.504    "blobfs_detect",
00:09:16.504    "blobfs_set_cache_size",
00:09:16.504    "bdev_aio_delete",
00:09:16.504    "bdev_aio_rescan",
00:09:16.504    "bdev_aio_create",
00:09:16.504    "bdev_ftl_set_property",
00:09:16.504    "bdev_ftl_get_properties",
00:09:16.504    "bdev_ftl_get_stats",
00:09:16.504    "bdev_ftl_unmap",
00:09:16.504    "bdev_ftl_unload",
00:09:16.504    "bdev_ftl_delete",
00:09:16.504    "bdev_ftl_load",
00:09:16.504    "bdev_ftl_create",
00:09:16.504    "bdev_virtio_attach_controller",
00:09:16.504    "bdev_virtio_scsi_get_devices",
00:09:16.504    "bdev_virtio_detach_controller",
00:09:16.504    "bdev_virtio_blk_set_hotplug",
00:09:16.504    "bdev_iscsi_delete",
00:09:16.504    "bdev_iscsi_create",
00:09:16.504    "bdev_iscsi_set_options",
00:09:16.504    "accel_error_inject_error",
00:09:16.504    "ioat_scan_accel_module",
00:09:16.504    "dsa_scan_accel_module",
00:09:16.504    "iaa_scan_accel_module",
00:09:16.504    "vfu_virtio_create_fs_endpoint",
00:09:16.504    "vfu_virtio_create_scsi_endpoint",
00:09:16.504    "vfu_virtio_scsi_remove_target",
00:09:16.504    "vfu_virtio_scsi_add_target",
00:09:16.504    "vfu_virtio_create_blk_endpoint",
00:09:16.504    "vfu_virtio_delete_endpoint",
00:09:16.504    "keyring_file_remove_key",
00:09:16.504    "keyring_file_add_key",
00:09:16.504    "keyring_linux_set_options",
00:09:16.504    "fsdev_aio_delete",
00:09:16.504    "fsdev_aio_create",
00:09:16.504    "iscsi_get_histogram",
00:09:16.504    "iscsi_enable_histogram",
00:09:16.504    "iscsi_set_options",
00:09:16.504    "iscsi_get_auth_groups",
00:09:16.504    "iscsi_auth_group_remove_secret",
00:09:16.504    "iscsi_auth_group_add_secret",
00:09:16.504    "iscsi_delete_auth_group",
00:09:16.504    "iscsi_create_auth_group",
00:09:16.504    "iscsi_set_discovery_auth",
00:09:16.504    "iscsi_get_options",
00:09:16.504    "iscsi_target_node_request_logout",
00:09:16.504    "iscsi_target_node_set_redirect",
00:09:16.504    "iscsi_target_node_set_auth",
00:09:16.504    "iscsi_target_node_add_lun",
00:09:16.504    "iscsi_get_stats",
00:09:16.504    "iscsi_get_connections",
00:09:16.504    "iscsi_portal_group_set_auth",
00:09:16.504    "iscsi_start_portal_group",
00:09:16.504    "iscsi_delete_portal_group",
00:09:16.504    "iscsi_create_portal_group",
00:09:16.504    "iscsi_get_portal_groups",
00:09:16.504    "iscsi_delete_target_node",
00:09:16.504    "iscsi_target_node_remove_pg_ig_maps",
00:09:16.504    "iscsi_target_node_add_pg_ig_maps",
00:09:16.504    "iscsi_create_target_node",
00:09:16.504    "iscsi_get_target_nodes",
00:09:16.504    "iscsi_delete_initiator_group",
00:09:16.504    "iscsi_initiator_group_remove_initiators",
00:09:16.504    "iscsi_initiator_group_add_initiators",
00:09:16.504    "iscsi_create_initiator_group",
00:09:16.504    "iscsi_get_initiator_groups",
00:09:16.504    "nvmf_set_crdt",
00:09:16.504    "nvmf_set_config",
00:09:16.504    "nvmf_set_max_subsystems",
00:09:16.504    "nvmf_stop_mdns_prr",
00:09:16.504    "nvmf_publish_mdns_prr",
00:09:16.504    "nvmf_subsystem_get_listeners",
00:09:16.504    "nvmf_subsystem_get_qpairs",
00:09:16.504    "nvmf_subsystem_get_controllers",
00:09:16.504    "nvmf_get_stats",
00:09:16.504    "nvmf_get_transports",
00:09:16.504    "nvmf_create_transport",
00:09:16.504    "nvmf_get_targets",
00:09:16.504    "nvmf_delete_target",
00:09:16.504    "nvmf_create_target",
00:09:16.504    "nvmf_subsystem_allow_any_host",
00:09:16.504    "nvmf_subsystem_set_keys",
00:09:16.504    "nvmf_subsystem_remove_host",
00:09:16.504    "nvmf_subsystem_add_host",
00:09:16.504    "nvmf_ns_remove_host",
00:09:16.504    "nvmf_ns_add_host",
00:09:16.504    "nvmf_subsystem_remove_ns",
00:09:16.504    "nvmf_subsystem_set_ns_ana_group",
00:09:16.504    "nvmf_subsystem_add_ns",
00:09:16.504    "nvmf_subsystem_listener_set_ana_state",
00:09:16.504    "nvmf_discovery_get_referrals",
00:09:16.504    "nvmf_discovery_remove_referral",
00:09:16.504    "nvmf_discovery_add_referral",
00:09:16.504    "nvmf_subsystem_remove_listener",
00:09:16.504    "nvmf_subsystem_add_listener",
00:09:16.504    "nvmf_delete_subsystem",
00:09:16.504    "nvmf_create_subsystem",
00:09:16.504    "nvmf_get_subsystems",
00:09:16.504    "env_dpdk_get_mem_stats",
00:09:16.504    "nbd_get_disks",
00:09:16.504    "nbd_stop_disk",
00:09:16.504    "nbd_start_disk",
00:09:16.504    "ublk_recover_disk",
00:09:16.504    "ublk_get_disks",
00:09:16.504    "ublk_stop_disk",
00:09:16.504    "ublk_start_disk",
00:09:16.504    "ublk_destroy_target",
00:09:16.504    "ublk_create_target",
00:09:16.504    "virtio_blk_create_transport",
00:09:16.504    "virtio_blk_get_transports",
00:09:16.504    "vhost_controller_set_coalescing",
00:09:16.504    "vhost_get_controllers",
00:09:16.504    "vhost_delete_controller",
00:09:16.504    "vhost_create_blk_controller",
00:09:16.504    "vhost_scsi_controller_remove_target",
00:09:16.504    "vhost_scsi_controller_add_target",
00:09:16.504    "vhost_start_scsi_controller",
00:09:16.504    "vhost_create_scsi_controller",
00:09:16.504    "thread_set_cpumask",
00:09:16.504    "scheduler_set_options",
00:09:16.504    "framework_get_governor",
00:09:16.504    "framework_get_scheduler",
00:09:16.504    "framework_set_scheduler",
00:09:16.504    "framework_get_reactors",
00:09:16.504    "thread_get_io_channels",
00:09:16.504    "thread_get_pollers",
00:09:16.504    "thread_get_stats",
00:09:16.504    "framework_monitor_context_switch",
00:09:16.504    "spdk_kill_instance",
00:09:16.504    "log_enable_timestamps",
00:09:16.504    "log_get_flags",
00:09:16.504    "log_clear_flag",
00:09:16.504    "log_set_flag",
00:09:16.504    "log_get_level",
00:09:16.504    "log_set_level",
00:09:16.504    "log_get_print_level",
00:09:16.504    "log_set_print_level",
00:09:16.504    "framework_enable_cpumask_locks",
00:09:16.504    "framework_disable_cpumask_locks",
00:09:16.504    "framework_wait_init",
00:09:16.504    "framework_start_init",
00:09:16.504    "scsi_get_devices",
00:09:16.504    "bdev_get_histogram",
00:09:16.504    "bdev_enable_histogram",
00:09:16.504    "bdev_set_qos_limit",
00:09:16.504    "bdev_set_qd_sampling_period",
00:09:16.504    "bdev_get_bdevs",
00:09:16.504    "bdev_reset_iostat",
00:09:16.504    "bdev_get_iostat",
00:09:16.504    "bdev_examine",
00:09:16.504    "bdev_wait_for_examine",
00:09:16.504    "bdev_set_options",
00:09:16.504    "accel_get_stats",
00:09:16.504    "accel_set_options",
00:09:16.504    "accel_set_driver",
00:09:16.504    "accel_crypto_key_destroy",
00:09:16.504    "accel_crypto_keys_get",
00:09:16.504    "accel_crypto_key_create",
00:09:16.504    "accel_assign_opc",
00:09:16.504    "accel_get_module_info",
00:09:16.504    "accel_get_opc_assignments",
00:09:16.504    "vmd_rescan",
00:09:16.504    "vmd_remove_device",
00:09:16.504    "vmd_enable",
00:09:16.504    "sock_get_default_impl",
00:09:16.504    "sock_set_default_impl",
00:09:16.504    "sock_impl_set_options",
00:09:16.504    "sock_impl_get_options",
00:09:16.504    "iobuf_get_stats",
00:09:16.504    "iobuf_set_options",
00:09:16.504    "keyring_get_keys",
00:09:16.504    "vfu_tgt_set_base_path",
00:09:16.504    "framework_get_pci_devices",
00:09:16.504    "framework_get_config",
00:09:16.504    "framework_get_subsystems",
00:09:16.504    "fsdev_set_opts",
00:09:16.504    "fsdev_get_opts",
00:09:16.504    "trace_get_info",
00:09:16.504    "trace_get_tpoint_group_mask",
00:09:16.504    "trace_disable_tpoint_group",
00:09:16.504    "trace_enable_tpoint_group",
00:09:16.504    "trace_clear_tpoint_mask",
00:09:16.504    "trace_set_tpoint_mask",
00:09:16.504    "notify_get_notifications",
00:09:16.504    "notify_get_types",
00:09:16.504    "spdk_get_version",
00:09:16.504    "rpc_get_methods"
00:09:16.504  ]
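The method list above was fetched over TCP even though spdk_tgt only listens on a UNIX-domain socket: tcp.sh@30 starts socat as a forwarder, then tcp.sh@33 points rpc.py at 127.0.0.1:9998. A hedged sketch of that bridge — the socket path and port mirror the trace, the function name is illustrative:

```shell
bridge_unix_to_tcp() {
    local sock=$1 port=$2
    # fork: serve each TCP client in its own child; reuseaddr: fast restarts
    socat TCP-LISTEN:"$port",reuseaddr,fork UNIX-CONNECT:"$sock" \
        </dev/null >/dev/null 2>&1 &
    echo $!   # hand the bridge pid back so the caller can tear it down
}

# usage mirroring the trace (spdk_tgt must already own /var/tmp/spdk.sock):
#   socat_pid=$(bridge_unix_to_tcp /var/tmp/spdk.sock 9998)
#   ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
#   kill "$socat_pid"
```

Redirecting socat's stdio matters when the function is called via `$( )`: a backgrounded child holding the command-substitution pipe open would otherwise make the caller hang.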
00:09:16.504   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:16.504   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:16.504   23:50:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2902843
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2902843 ']'
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2902843
00:09:16.504    23:50:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:16.504    23:50:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2902843
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:16.504   23:50:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:16.505   23:50:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2902843'
00:09:16.505  killing process with pid 2902843
00:09:16.505   23:50:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2902843
00:09:16.505   23:50:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2902843
00:09:16.764  
00:09:16.764  real	0m1.149s
00:09:16.764  user	0m1.965s
00:09:16.764  sys	0m0.416s
00:09:16.764   23:50:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.764   23:50:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:16.764  ************************************
00:09:16.764  END TEST spdkcli_tcp
00:09:16.764  ************************************
00:09:17.023   23:50:32  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:17.023   23:50:32  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:17.023   23:50:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:17.023   23:50:32  -- common/autotest_common.sh@10 -- # set +x
00:09:17.023  ************************************
00:09:17.023  START TEST dpdk_mem_utility
00:09:17.023  ************************************
00:09:17.023   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:17.023  * Looking for test storage...
00:09:17.023  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility
00:09:17.023    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:17.023     23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:09:17.023     23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:17.023    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:17.023     23:50:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:17.023    23:50:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:09:17.023    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:17.023    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:17.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.023  		--rc genhtml_branch_coverage=1
00:09:17.023  		--rc genhtml_function_coverage=1
00:09:17.023  		--rc genhtml_legend=1
00:09:17.023  		--rc geninfo_all_blocks=1
00:09:17.023  		--rc geninfo_unexecuted_blocks=1
00:09:17.023  		
00:09:17.023  		'
00:09:17.023    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:17.023  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.023  		--rc genhtml_branch_coverage=1
00:09:17.023  		--rc genhtml_function_coverage=1
00:09:17.023  		--rc genhtml_legend=1
00:09:17.023  		--rc geninfo_all_blocks=1
00:09:17.024  		--rc geninfo_unexecuted_blocks=1
00:09:17.024  		
00:09:17.024  		'
00:09:17.024    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:17.024  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.024  		--rc genhtml_branch_coverage=1
00:09:17.024  		--rc genhtml_function_coverage=1
00:09:17.024  		--rc genhtml_legend=1
00:09:17.024  		--rc geninfo_all_blocks=1
00:09:17.024  		--rc geninfo_unexecuted_blocks=1
00:09:17.024  		
00:09:17.024  		'
00:09:17.024    23:50:32 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:17.024  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.024  		--rc genhtml_branch_coverage=1
00:09:17.024  		--rc genhtml_function_coverage=1
00:09:17.024  		--rc genhtml_legend=1
00:09:17.024  		--rc geninfo_all_blocks=1
00:09:17.024  		--rc geninfo_unexecuted_blocks=1
00:09:17.024  		
00:09:17.024  		'
00:09:17.024   23:50:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:09:17.024   23:50:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2903141
00:09:17.024   23:50:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2903141
00:09:17.024   23:50:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2903141 ']'
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.024  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.024   23:50:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:17.283  [2024-12-09 23:50:32.895825] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:17.283  [2024-12-09 23:50:32.895873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903141 ]
00:09:17.283  [2024-12-09 23:50:32.969937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:17.283  [2024-12-09 23:50:33.010200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.542   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:17.542   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:09:17.542   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:17.542   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:17.542   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:17.542   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:17.542  {
00:09:17.542  "filename": "/tmp/spdk_mem_dump.txt"
00:09:17.542  }
00:09:17.542   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:17.542   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py
00:09:17.542  DPDK memory size 818.000000 MiB in 1 heap(s)
00:09:17.542  1 heaps totaling size 818.000000 MiB
00:09:17.542    size:  818.000000 MiB heap id: 0
00:09:17.542  end heaps----------
00:09:17.542  9 mempools totaling size 603.782043 MiB
00:09:17.542    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:17.542    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:17.542    size:  100.555481 MiB name: bdev_io_2903141
00:09:17.542    size:   50.003479 MiB name: msgpool_2903141
00:09:17.542    size:   36.509338 MiB name: fsdev_io_2903141
00:09:17.542    size:   21.763794 MiB name: PDU_Pool
00:09:17.542    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:17.542    size:    4.133484 MiB name: evtpool_2903141
00:09:17.542    size:    0.026123 MiB name: Session_Pool
00:09:17.542  end mempools-------
00:09:17.542  6 memzones totaling size 4.142822 MiB
00:09:17.542    size:    1.000366 MiB name: RG_ring_0_2903141
00:09:17.542    size:    1.000366 MiB name: RG_ring_1_2903141
00:09:17.542    size:    1.000366 MiB name: RG_ring_4_2903141
00:09:17.542    size:    1.000366 MiB name: RG_ring_5_2903141
00:09:17.542    size:    0.125366 MiB name: RG_ring_2_2903141
00:09:17.542    size:    0.015991 MiB name: RG_ring_3_2903141
00:09:17.542  end memzones-------
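The dpdk_mem_info.py summary above has a fixed shape: a heap total, then one `size: ... MiB name: ...` line per mempool, with a claimed combined size ("9 mempools totaling size 603.782043 MiB"). A hedged awk helper to sum the per-pool sizes from such a dump and cross-check that total (the function name is illustrative; feed it only the mempool section, since the memzone lines share the same `size:`/`name:` shape and would inflate the sum):

```shell
sum_mempool_sizes() {
    # each matching line looks like:  size:  212.674988 MiB name: PDU_Pool
    # field 2 is the size in MiB; accumulate and print the total
    awk '/name:/ && /size:/ { sum += $2 } END { printf "%.6f\n", sum }' "$1"
}
```

On the nine mempool lines above this sums to 603.782044 MiB, agreeing with the reported 603.782043 MiB to within the per-line rounding of the dump.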
00:09:17.542   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0
00:09:17.542  heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15
00:09:17.542    list of free elements. size: 10.852478 MiB
00:09:17.542      element at address: 0x200019200000 with size:    0.999878 MiB
00:09:17.542      element at address: 0x200019400000 with size:    0.999878 MiB
00:09:17.542      element at address: 0x200000400000 with size:    0.998535 MiB
00:09:17.542      element at address: 0x200032000000 with size:    0.994446 MiB
00:09:17.542      element at address: 0x200006400000 with size:    0.959839 MiB
00:09:17.542      element at address: 0x200012c00000 with size:    0.944275 MiB
00:09:17.542      element at address: 0x200019600000 with size:    0.936584 MiB
00:09:17.542      element at address: 0x200000200000 with size:    0.717346 MiB
00:09:17.542      element at address: 0x20001ae00000 with size:    0.582886 MiB
00:09:17.542      element at address: 0x200000c00000 with size:    0.495422 MiB
00:09:17.542      element at address: 0x20000a600000 with size:    0.490723 MiB
00:09:17.542      element at address: 0x200019800000 with size:    0.485657 MiB
00:09:17.543      element at address: 0x200003e00000 with size:    0.481934 MiB
00:09:17.543      element at address: 0x200028200000 with size:    0.410034 MiB
00:09:17.543      element at address: 0x200000800000 with size:    0.355042 MiB
00:09:17.543    list of standard malloc elements. size: 199.218628 MiB
00:09:17.543      element at address: 0x20000a7fff80 with size:  132.000122 MiB
00:09:17.543      element at address: 0x2000065fff80 with size:   64.000122 MiB
00:09:17.543      element at address: 0x2000192fff80 with size:    1.000122 MiB
00:09:17.543      element at address: 0x2000194fff80 with size:    1.000122 MiB
00:09:17.543      element at address: 0x2000196fff80 with size:    1.000122 MiB
00:09:17.543      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:09:17.543      element at address: 0x2000196eff00 with size:    0.062622 MiB
00:09:17.543      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:09:17.543      element at address: 0x2000196efdc0 with size:    0.000305 MiB
00:09:17.543      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000004ffb80 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000085ae40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000085b040 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000085f300 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000087f5c0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000087f680 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000008ff940 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200000cff000 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200003e7b600 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200003e7b6c0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200003efb980 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000064fdd80 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000a67da00 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000a67dac0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20000a6fdd80 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200012cf1bc0 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000196efc40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000196efd00 with size:    0.000183 MiB
00:09:17.543      element at address: 0x2000198bc740 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20001ae95380 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20001ae95440 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200028268f80 with size:    0.000183 MiB
00:09:17.543      element at address: 0x200028269040 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20002826fc40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20002826fe40 with size:    0.000183 MiB
00:09:17.543      element at address: 0x20002826ff00 with size:    0.000183 MiB
00:09:17.543    list of memzone associated elements. size: 607.928894 MiB
00:09:17.543      element at address: 0x20001ae95500 with size:  211.416748 MiB
00:09:17.543        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:17.543      element at address: 0x20002826ffc0 with size:  157.562561 MiB
00:09:17.543        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:17.543      element at address: 0x200012df1e80 with size:  100.055054 MiB
00:09:17.543        associated memzone info: size:  100.054932 MiB name: MP_bdev_io_2903141_0
00:09:17.543      element at address: 0x200000dff380 with size:   48.003052 MiB
00:09:17.543        associated memzone info: size:   48.002930 MiB name: MP_msgpool_2903141_0
00:09:17.543      element at address: 0x200003ffdb80 with size:   36.008911 MiB
00:09:17.543        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_2903141_0
00:09:17.543      element at address: 0x2000199be940 with size:   20.255554 MiB
00:09:17.543        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:17.543      element at address: 0x2000321feb40 with size:   18.005066 MiB
00:09:17.543        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:17.543      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:09:17.543        associated memzone info: size:    3.000122 MiB name: MP_evtpool_2903141_0
00:09:17.543      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:09:17.543        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_2903141
00:09:17.543      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:09:17.543        associated memzone info: size:    1.007996 MiB name: MP_evtpool_2903141
00:09:17.543      element at address: 0x20000a6fde40 with size:    1.008118 MiB
00:09:17.543        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:17.543      element at address: 0x2000198bc800 with size:    1.008118 MiB
00:09:17.543        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:17.543      element at address: 0x2000064fde40 with size:    1.008118 MiB
00:09:17.543        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:17.543      element at address: 0x200003efba40 with size:    1.008118 MiB
00:09:17.543        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:17.543      element at address: 0x200000cff180 with size:    1.000488 MiB
00:09:17.543        associated memzone info: size:    1.000366 MiB name: RG_ring_0_2903141
00:09:17.543      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:09:17.543        associated memzone info: size:    1.000366 MiB name: RG_ring_1_2903141
00:09:17.543      element at address: 0x200012cf1c80 with size:    1.000488 MiB
00:09:17.543        associated memzone info: size:    1.000366 MiB name: RG_ring_4_2903141
00:09:17.543      element at address: 0x2000320fe940 with size:    1.000488 MiB
00:09:17.543        associated memzone info: size:    1.000366 MiB name: RG_ring_5_2903141
00:09:17.543      element at address: 0x20000087f740 with size:    0.500488 MiB
00:09:17.543        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_2903141
00:09:17.543      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:09:17.543        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_2903141
00:09:17.543      element at address: 0x20000a67db80 with size:    0.500488 MiB
00:09:17.543        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:17.543      element at address: 0x200003e7b780 with size:    0.500488 MiB
00:09:17.543        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:17.543      element at address: 0x20001987c540 with size:    0.250488 MiB
00:09:17.543        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:17.543      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:09:17.543        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_2903141
00:09:17.543      element at address: 0x20000085f3c0 with size:    0.125488 MiB
00:09:17.543        associated memzone info: size:    0.125366 MiB name: RG_ring_2_2903141
00:09:17.543      element at address: 0x2000064f5b80 with size:    0.031738 MiB
00:09:17.543        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:17.543      element at address: 0x200028269100 with size:    0.023743 MiB
00:09:17.543        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:17.543      element at address: 0x20000085b100 with size:    0.016113 MiB
00:09:17.543        associated memzone info: size:    0.015991 MiB name: RG_ring_3_2903141
00:09:17.543      element at address: 0x20002826f240 with size:    0.002441 MiB
00:09:17.543        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:17.543      element at address: 0x2000004ffc40 with size:    0.000305 MiB
00:09:17.543        associated memzone info: size:    0.000183 MiB name: MP_msgpool_2903141
00:09:17.543      element at address: 0x2000008ffa00 with size:    0.000305 MiB
00:09:17.543        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_2903141
00:09:17.543      element at address: 0x20000085af00 with size:    0.000305 MiB
00:09:17.543        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_2903141
00:09:17.543      element at address: 0x20002826fd00 with size:    0.000305 MiB
00:09:17.543        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:09:17.543   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:17.543   23:50:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2903141
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2903141 ']'
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2903141
00:09:17.543    23:50:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:17.543    23:50:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903141
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903141'
00:09:17.543  killing process with pid 2903141
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2903141
00:09:17.543   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2903141
00:09:18.111  
00:09:18.111  real	0m0.996s
00:09:18.111  user	0m0.937s
00:09:18.111  sys	0m0.390s
00:09:18.111   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:18.111   23:50:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:18.111  ************************************
00:09:18.111  END TEST dpdk_mem_utility
00:09:18.111  ************************************
00:09:18.111   23:50:33  -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:09:18.111   23:50:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:18.111   23:50:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.111   23:50:33  -- common/autotest_common.sh@10 -- # set +x
00:09:18.111  ************************************
00:09:18.111  START TEST event
00:09:18.111  ************************************
00:09:18.111   23:50:33 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:09:18.111  * Looking for test storage...
00:09:18.111  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:18.111     23:50:33 event -- common/autotest_common.sh@1711 -- # lcov --version
00:09:18.111     23:50:33 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:18.111    23:50:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:18.111    23:50:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:18.111    23:50:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:18.111    23:50:33 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:18.111    23:50:33 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:18.111    23:50:33 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:18.111    23:50:33 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:18.111    23:50:33 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:18.111    23:50:33 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:18.111    23:50:33 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:18.111    23:50:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:18.111    23:50:33 event -- scripts/common.sh@344 -- # case "$op" in
00:09:18.111    23:50:33 event -- scripts/common.sh@345 -- # : 1
00:09:18.111    23:50:33 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:18.111    23:50:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:18.111     23:50:33 event -- scripts/common.sh@365 -- # decimal 1
00:09:18.111     23:50:33 event -- scripts/common.sh@353 -- # local d=1
00:09:18.111     23:50:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:18.111     23:50:33 event -- scripts/common.sh@355 -- # echo 1
00:09:18.111    23:50:33 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:18.111     23:50:33 event -- scripts/common.sh@366 -- # decimal 2
00:09:18.111     23:50:33 event -- scripts/common.sh@353 -- # local d=2
00:09:18.111     23:50:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:18.111     23:50:33 event -- scripts/common.sh@355 -- # echo 2
00:09:18.111    23:50:33 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:18.111    23:50:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:18.111    23:50:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:18.111    23:50:33 event -- scripts/common.sh@368 -- # return 0
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:18.111  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.111  		--rc genhtml_branch_coverage=1
00:09:18.111  		--rc genhtml_function_coverage=1
00:09:18.111  		--rc genhtml_legend=1
00:09:18.111  		--rc geninfo_all_blocks=1
00:09:18.111  		--rc geninfo_unexecuted_blocks=1
00:09:18.111  		
00:09:18.111  		'
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:18.111  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.111  		--rc genhtml_branch_coverage=1
00:09:18.111  		--rc genhtml_function_coverage=1
00:09:18.111  		--rc genhtml_legend=1
00:09:18.111  		--rc geninfo_all_blocks=1
00:09:18.111  		--rc geninfo_unexecuted_blocks=1
00:09:18.111  		
00:09:18.111  		'
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:18.111  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.111  		--rc genhtml_branch_coverage=1
00:09:18.111  		--rc genhtml_function_coverage=1
00:09:18.111  		--rc genhtml_legend=1
00:09:18.111  		--rc geninfo_all_blocks=1
00:09:18.111  		--rc geninfo_unexecuted_blocks=1
00:09:18.111  		
00:09:18.111  		'
00:09:18.111    23:50:33 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:18.111  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:18.111  		--rc genhtml_branch_coverage=1
00:09:18.111  		--rc genhtml_function_coverage=1
00:09:18.111  		--rc genhtml_legend=1
00:09:18.111  		--rc geninfo_all_blocks=1
00:09:18.111  		--rc geninfo_unexecuted_blocks=1
00:09:18.111  		
00:09:18.111  		'
00:09:18.111   23:50:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:09:18.111    23:50:33 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:18.111   23:50:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:18.111   23:50:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:18.111   23:50:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.111   23:50:33 event -- common/autotest_common.sh@10 -- # set +x
00:09:18.111  ************************************
00:09:18.112  START TEST event_perf
00:09:18.112  ************************************
00:09:18.112   23:50:33 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:18.112  Running I/O for 1 seconds...[2024-12-09 23:50:33.957832] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:18.112  [2024-12-09 23:50:33.957916] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903285 ]
00:09:18.370  [2024-12-09 23:50:34.033136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:18.370  [2024-12-09 23:50:34.075552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:18.370  [2024-12-09 23:50:34.075661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:18.370  [2024-12-09 23:50:34.075759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:18.370  [2024-12-09 23:50:34.075759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:19.307  Running I/O for 1 seconds...
00:09:19.307  lcore  0:   207008
00:09:19.307  lcore  1:   207007
00:09:19.307  lcore  2:   207008
00:09:19.307  lcore  3:   207008
00:09:19.307  done.
00:09:19.307  
00:09:19.307  real	0m1.179s
00:09:19.307  user	0m4.104s
00:09:19.307  sys	0m0.073s
00:09:19.307   23:50:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.307   23:50:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:19.307  ************************************
00:09:19.307  END TEST event_perf
00:09:19.307  ************************************
00:09:19.307   23:50:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:19.307   23:50:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:19.307   23:50:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.307   23:50:35 event -- common/autotest_common.sh@10 -- # set +x
00:09:19.566  ************************************
00:09:19.566  START TEST event_reactor
00:09:19.566  ************************************
00:09:19.566   23:50:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:09:19.566  [2024-12-09 23:50:35.195233] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:19.566  [2024-12-09 23:50:35.195272] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903478 ]
00:09:19.566  [2024-12-09 23:50:35.271049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.566  [2024-12-09 23:50:35.311257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:20.502  test_start
00:09:20.502  oneshot
00:09:20.502  tick 100
00:09:20.502  tick 100
00:09:20.502  tick 250
00:09:20.502  tick 100
00:09:20.502  tick 100
00:09:20.502  tick 100
00:09:20.502  tick 250
00:09:20.502  tick 500
00:09:20.502  tick 100
00:09:20.502  tick 100
00:09:20.502  tick 250
00:09:20.502  tick 100
00:09:20.502  tick 100
00:09:20.502  test_end
00:09:20.502  
00:09:20.502  real	0m1.163s
00:09:20.502  user	0m1.087s
00:09:20.502  sys	0m0.073s
00:09:20.502   23:50:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:20.502   23:50:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:09:20.502  ************************************
00:09:20.502  END TEST event_reactor
00:09:20.502  ************************************
00:09:20.762   23:50:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:20.762   23:50:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:20.762   23:50:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:20.762   23:50:36 event -- common/autotest_common.sh@10 -- # set +x
00:09:20.762  ************************************
00:09:20.762  START TEST event_reactor_perf
00:09:20.762  ************************************
00:09:20.762   23:50:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:20.762  [2024-12-09 23:50:36.439710] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:20.762  [2024-12-09 23:50:36.439777] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903719 ]
00:09:20.762  [2024-12-09 23:50:36.517453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:20.762  [2024-12-09 23:50:36.556417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.138  test_start
00:09:22.138  test_end
00:09:22.138  Performance:   515454 events per second
00:09:22.138  
00:09:22.138  real	0m1.176s
00:09:22.138  user	0m1.097s
00:09:22.138  sys	0m0.075s
00:09:22.138   23:50:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:22.138   23:50:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:22.138  ************************************
00:09:22.138  END TEST event_reactor_perf
00:09:22.138  ************************************
00:09:22.138    23:50:37 event -- event/event.sh@49 -- # uname -s
00:09:22.138   23:50:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:22.138   23:50:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:22.138   23:50:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:22.138   23:50:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:22.138   23:50:37 event -- common/autotest_common.sh@10 -- # set +x
00:09:22.138  ************************************
00:09:22.138  START TEST event_scheduler
00:09:22.138  ************************************
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:09:22.138  * Looking for test storage...
00:09:22.138  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:22.138     23:50:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:09:22.138     23:50:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:22.138     23:50:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:22.138    23:50:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:22.138  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.138  		--rc genhtml_branch_coverage=1
00:09:22.138  		--rc genhtml_function_coverage=1
00:09:22.138  		--rc genhtml_legend=1
00:09:22.138  		--rc geninfo_all_blocks=1
00:09:22.138  		--rc geninfo_unexecuted_blocks=1
00:09:22.138  		
00:09:22.138  		'
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:22.138  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.138  		--rc genhtml_branch_coverage=1
00:09:22.138  		--rc genhtml_function_coverage=1
00:09:22.138  		--rc genhtml_legend=1
00:09:22.138  		--rc geninfo_all_blocks=1
00:09:22.138  		--rc geninfo_unexecuted_blocks=1
00:09:22.138  		
00:09:22.138  		'
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:22.138  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.138  		--rc genhtml_branch_coverage=1
00:09:22.138  		--rc genhtml_function_coverage=1
00:09:22.138  		--rc genhtml_legend=1
00:09:22.138  		--rc geninfo_all_blocks=1
00:09:22.138  		--rc geninfo_unexecuted_blocks=1
00:09:22.138  		
00:09:22.138  		'
00:09:22.138    23:50:37 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:22.138  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:22.138  		--rc genhtml_branch_coverage=1
00:09:22.138  		--rc genhtml_function_coverage=1
00:09:22.138  		--rc genhtml_legend=1
00:09:22.138  		--rc geninfo_all_blocks=1
00:09:22.138  		--rc geninfo_unexecuted_blocks=1
00:09:22.138  		
00:09:22.138  		'
00:09:22.138   23:50:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:22.138   23:50:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2903994
00:09:22.138   23:50:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:22.138   23:50:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:22.138   23:50:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2903994
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2903994 ']'
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:22.138  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:22.138   23:50:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:22.138  [2024-12-09 23:50:37.888047] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:22.138  [2024-12-09 23:50:37.888094] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903994 ]
00:09:22.138  [2024-12-09 23:50:37.962919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:22.397  [2024-12-09 23:50:38.008064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.397  [2024-12-09 23:50:38.008184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:22.397  [2024-12-09 23:50:38.008256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:22.397  [2024-12-09 23:50:38.008257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:09:22.397   23:50:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  [2024-12-09 23:50:38.044853] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:09:22.397  [2024-12-09 23:50:38.044872] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:09:22.397  [2024-12-09 23:50:38.044881] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:22.397  [2024-12-09 23:50:38.044887] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:22.397  [2024-12-09 23:50:38.044892] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  [2024-12-09 23:50:38.120503] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  ************************************
00:09:22.397  START TEST scheduler_create_thread
00:09:22.397  ************************************
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  2
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  3
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  4
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  5
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.397  6
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.397   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.398  7
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.398  8
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.398  9
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.398  10
00:09:22.398   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.398    23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:22.398    23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.398    23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.398    23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.657   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:22.657   23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:22.657   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.657   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:22.657   23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:22.657    23:50:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:22.657    23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:22.657    23:50:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:24.033    23:50:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.033   23:50:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:24.033   23:50:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:24.033   23:50:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:24.033   23:50:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:24.969   23:50:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:24.969  
00:09:24.969  real	0m2.621s
00:09:24.969  user	0m0.023s
00:09:24.969  sys	0m0.005s
00:09:24.969   23:50:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:24.969   23:50:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:24.969  ************************************
00:09:24.969  END TEST scheduler_create_thread
00:09:24.969  ************************************
00:09:24.969   23:50:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:24.969   23:50:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2903994
00:09:24.969   23:50:40 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2903994 ']'
00:09:24.969   23:50:40 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 2903994
00:09:24.969    23:50:40 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:09:24.969   23:50:40 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:24.969    23:50:40 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2903994
00:09:25.228   23:50:40 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:25.228   23:50:40 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:25.228   23:50:40 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2903994'
00:09:25.228  killing process with pid 2903994
00:09:25.228   23:50:40 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2903994
00:09:25.228   23:50:40 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2903994
00:09:25.487  [2024-12-09 23:50:41.258668] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:09:25.746  
00:09:25.746  real	0m3.759s
00:09:25.746  user	0m5.602s
00:09:25.746  sys	0m0.367s
00:09:25.746   23:50:41 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.746   23:50:41 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:25.746  ************************************
00:09:25.746  END TEST event_scheduler
00:09:25.746  ************************************
00:09:25.746   23:50:41 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:25.746   23:50:41 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:25.746   23:50:41 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:25.746   23:50:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:25.746   23:50:41 event -- common/autotest_common.sh@10 -- # set +x
00:09:25.746  ************************************
00:09:25.746  START TEST app_repeat
00:09:25.746  ************************************
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2904713
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2904713'
00:09:25.746  Process app_repeat pid: 2904713
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:25.746  spdk_app_start Round 0
00:09:25.746   23:50:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2904713 /var/tmp/spdk-nbd.sock
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2904713 ']'
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:25.746  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:25.746   23:50:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:25.746  [2024-12-09 23:50:41.536906] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:25.746  [2024-12-09 23:50:41.536957] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904713 ]
00:09:26.006  [2024-12-09 23:50:41.613028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:26.006  [2024-12-09 23:50:41.651373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:26.006  [2024-12-09 23:50:41.651374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:26.006   23:50:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:26.006   23:50:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:26.006   23:50:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:26.264  Malloc0
00:09:26.264   23:50:41 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:26.523  Malloc1
00:09:26.523   23:50:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:26.523   23:50:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:26.524   23:50:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:26.783  /dev/nbd0
00:09:26.783    23:50:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:26.783   23:50:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:26.783  1+0 records in
00:09:26.783  1+0 records out
00:09:26.783  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224558 s, 18.2 MB/s
00:09:26.783    23:50:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:26.783   23:50:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:26.783   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:26.783   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:26.783   23:50:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:26.783  /dev/nbd1
00:09:27.042    23:50:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:27.042   23:50:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:27.042  1+0 records in
00:09:27.042  1+0 records out
00:09:27.042  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000232326 s, 17.6 MB/s
00:09:27.042    23:50:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:27.042   23:50:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:27.042   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:27.042   23:50:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:27.042    23:50:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:27.042    23:50:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:27.042     23:50:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:27.042    23:50:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:27.042    {
00:09:27.042      "nbd_device": "/dev/nbd0",
00:09:27.042      "bdev_name": "Malloc0"
00:09:27.042    },
00:09:27.042    {
00:09:27.042      "nbd_device": "/dev/nbd1",
00:09:27.042      "bdev_name": "Malloc1"
00:09:27.042    }
00:09:27.042  ]'
00:09:27.042     23:50:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:27.042    {
00:09:27.042      "nbd_device": "/dev/nbd0",
00:09:27.042      "bdev_name": "Malloc0"
00:09:27.042    },
00:09:27.042    {
00:09:27.042      "nbd_device": "/dev/nbd1",
00:09:27.042      "bdev_name": "Malloc1"
00:09:27.042    }
00:09:27.042  ]'
00:09:27.042     23:50:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:27.301    23:50:42 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:27.301  /dev/nbd1'
00:09:27.301     23:50:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:27.301  /dev/nbd1'
00:09:27.301     23:50:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:27.301    23:50:42 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:27.301    23:50:42 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:27.301  256+0 records in
00:09:27.301  256+0 records out
00:09:27.301  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106326 s, 98.6 MB/s
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:27.301  256+0 records in
00:09:27.301  256+0 records out
00:09:27.301  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140936 s, 74.4 MB/s
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:27.301  256+0 records in
00:09:27.301  256+0 records out
00:09:27.301  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145013 s, 72.3 MB/s
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
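The `nbd_dd_data_verify` cycle traced above fills a reference file from /dev/urandom, dd's it onto each nbd device, then runs `cmp -b -n 1M` against each device before deleting the reference file. A minimal sketch of that write/verify pattern, using ordinary temp files in place of /dev/nbd0 and /dev/nbd1 so no nbd module is needed (`verify_target` is an illustrative name, not an SPDK function):

```shell
# Build a 1 MiB reference file of random data, as nbd_common.sh does.
ref=$(mktemp)
dd if=/dev/urandom of="$ref" bs=4096 count=256 status=none

# Copy the reference onto a target, then compare byte-for-byte.
# Plain files stand in for the /dev/nbdX devices in this sketch.
verify_target() {
  dd if="$ref" of="$1" bs=4096 count=256 status=none
  cmp -s -n 1M "$ref" "$1"
}

t0=$(mktemp); t1=$(mktemp)
verify_target "$t0" && verify_target "$t1" && echo "verify ok"
rm -f "$ref" "$t0" "$t1"
```

The real helper additionally routes the writes through the nbd block layer with `oflag=direct`, so a successful `cmp` proves the data survived the round trip through the Malloc bdevs.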
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:27.301   23:50:42 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:27.560    23:50:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:27.560    23:50:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:27.560   23:50:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:27.819   23:50:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:27.819   23:50:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:27.819   23:50:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:27.819   23:50:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:27.819     23:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:27.819    23:50:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:28.099   23:50:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:28.099   23:50:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:28.099   23:50:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:28.099   23:50:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:28.099   23:50:43 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:28.357  [2024-12-09 23:50:44.031727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:28.357  [2024-12-09 23:50:44.067544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:28.357  [2024-12-09 23:50:44.067546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:28.357  [2024-12-09 23:50:44.107914] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:28.357  [2024-12-09 23:50:44.107953] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:31.644   23:50:46 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:31.644   23:50:46 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:31.644  spdk_app_start Round 1
00:09:31.644   23:50:46 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2904713 /var/tmp/spdk-nbd.sock
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2904713 ']'
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:31.644  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:31.644   23:50:46 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:31.644   23:50:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:31.644   23:50:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:31.644   23:50:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:31.644  Malloc0
00:09:31.644   23:50:47 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:31.644  Malloc1
00:09:31.904   23:50:47 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:31.904  /dev/nbd0
00:09:31.904    23:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:31.904   23:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:31.904   23:50:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:31.904   23:50:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:31.904   23:50:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:31.904   23:50:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:31.904   23:50:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:32.163  1+0 records in
00:09:32.163  1+0 records out
00:09:32.163  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197218 s, 20.8 MB/s
00:09:32.163    23:50:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:32.163   23:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:32.163   23:50:47 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:32.163   23:50:47 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:32.163  /dev/nbd1
00:09:32.163    23:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:32.163   23:50:47 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:32.163   23:50:47 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:32.163  1+0 records in
00:09:32.163  1+0 records out
00:09:32.163  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231988 s, 17.7 MB/s
00:09:32.163    23:50:47 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:32.163   23:50:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:32.163   23:50:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:32.163   23:50:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:32.163   23:50:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:32.163   23:50:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:32.163   23:50:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:32.163    23:50:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:32.163    23:50:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.163     23:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:32.422    23:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:32.422    {
00:09:32.422      "nbd_device": "/dev/nbd0",
00:09:32.422      "bdev_name": "Malloc0"
00:09:32.422    },
00:09:32.422    {
00:09:32.422      "nbd_device": "/dev/nbd1",
00:09:32.422      "bdev_name": "Malloc1"
00:09:32.422    }
00:09:32.422  ]'
00:09:32.422     23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:32.422    {
00:09:32.422      "nbd_device": "/dev/nbd0",
00:09:32.422      "bdev_name": "Malloc0"
00:09:32.422    },
00:09:32.422    {
00:09:32.422      "nbd_device": "/dev/nbd1",
00:09:32.422      "bdev_name": "Malloc1"
00:09:32.422    }
00:09:32.422  ]'
00:09:32.422     23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:32.422    23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:32.422  /dev/nbd1'
00:09:32.422     23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:32.422  /dev/nbd1'
00:09:32.422     23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:32.422    23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:32.422    23:50:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:32.422  256+0 records in
00:09:32.422  256+0 records out
00:09:32.422  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106725 s, 98.3 MB/s
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:32.422   23:50:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:32.681  256+0 records in
00:09:32.681  256+0 records out
00:09:32.681  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139815 s, 75.0 MB/s
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:32.681  256+0 records in
00:09:32.681  256+0 records out
00:09:32.681  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149107 s, 70.3 MB/s
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:32.681    23:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:32.681   23:50:48 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:32.940    23:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:32.940   23:50:48 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:32.940    23:50:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:32.940    23:50:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:32.940     23:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:33.199    23:50:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:33.199     23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:33.199     23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:33.199    23:50:48 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:33.199     23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:33.199     23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:33.199     23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:33.199    23:50:48 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:33.199    23:50:48 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:33.199   23:50:48 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:33.199   23:50:48 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:33.199   23:50:48 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:33.199   23:50:48 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:33.458   23:50:49 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:33.717  [2024-12-09 23:50:49.328545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:33.717  [2024-12-09 23:50:49.364820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:33.717  [2024-12-09 23:50:49.364833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:33.717  [2024-12-09 23:50:49.406321] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:33.717  [2024-12-09 23:50:49.406359] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:36.405   23:50:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:36.405   23:50:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:09:36.405  spdk_app_start Round 2
00:09:36.405   23:50:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2904713 /var/tmp/spdk-nbd.sock
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2904713 ']'
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:36.405  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:36.405   23:50:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:36.664   23:50:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:36.664   23:50:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:36.664   23:50:52 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:36.923  Malloc0
00:09:36.923   23:50:52 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:36.923  Malloc1
00:09:37.182   23:50:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:37.182   23:50:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:37.182  /dev/nbd0
00:09:37.182    23:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:37.182   23:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:37.182   23:50:53 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:37.441  1+0 records in
00:09:37.441  1+0 records out
00:09:37.441  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240275 s, 17.0 MB/s
00:09:37.441    23:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:37.441   23:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.441   23:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:37.441   23:50:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:37.441  /dev/nbd1
00:09:37.441    23:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:37.441   23:50:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:37.441  1+0 records in
00:09:37.441  1+0 records out
00:09:37.441  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236846 s, 17.3 MB/s
00:09:37.441    23:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:09:37.441   23:50:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:37.700   23:50:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:37.700   23:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:37.700   23:50:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:37.700    23:50:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:37.700    23:50:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:37.700     23:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:37.700    23:50:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:37.700    {
00:09:37.700      "nbd_device": "/dev/nbd0",
00:09:37.700      "bdev_name": "Malloc0"
00:09:37.700    },
00:09:37.700    {
00:09:37.700      "nbd_device": "/dev/nbd1",
00:09:37.700      "bdev_name": "Malloc1"
00:09:37.700    }
00:09:37.700  ]'
00:09:37.700     23:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:37.700    {
00:09:37.700      "nbd_device": "/dev/nbd0",
00:09:37.700      "bdev_name": "Malloc0"
00:09:37.701    },
00:09:37.701    {
00:09:37.701      "nbd_device": "/dev/nbd1",
00:09:37.701      "bdev_name": "Malloc1"
00:09:37.701    }
00:09:37.701  ]'
00:09:37.701     23:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:37.701    23:50:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:37.701  /dev/nbd1'
00:09:37.701     23:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:37.701  /dev/nbd1'
00:09:37.701     23:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:37.701    23:50:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:37.701    23:50:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:37.701   23:50:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:37.959  256+0 records in
00:09:37.959  256+0 records out
00:09:37.959  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108048 s, 97.0 MB/s
00:09:37.959   23:50:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:37.959   23:50:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:37.959  256+0 records in
00:09:37.960  256+0 records out
00:09:37.960  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130585 s, 80.3 MB/s
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:37.960  256+0 records in
00:09:37.960  256+0 records out
00:09:37.960  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145075 s, 72.3 MB/s
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:37.960   23:50:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:37.960    23:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:38.218   23:50:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:38.218    23:50:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:38.218   23:50:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:38.218    23:50:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:38.218    23:50:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:38.218     23:50:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:38.477    23:50:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:38.477     23:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:38.477     23:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:38.477    23:50:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:38.477     23:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:38.477     23:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:38.477     23:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:38.477    23:50:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:38.477    23:50:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:38.477   23:50:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:38.477   23:50:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:38.477   23:50:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:38.477   23:50:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:38.736   23:50:54 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:38.995  [2024-12-09 23:50:54.654816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:38.995  [2024-12-09 23:50:54.691179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.995  [2024-12-09 23:50:54.691181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:38.995  [2024-12-09 23:50:54.731916] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:38.995  [2024-12-09 23:50:54.731957] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:42.279   23:50:57 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2904713 /var/tmp/spdk-nbd.sock
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2904713 ']'
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:42.279  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:42.279   23:50:57 event.app_repeat -- event/event.sh@39 -- # killprocess 2904713
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2904713 ']'
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2904713
00:09:42.279    23:50:57 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:42.279    23:50:57 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2904713
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2904713'
00:09:42.279  killing process with pid 2904713
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2904713
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2904713
00:09:42.279  spdk_app_start is called in Round 0.
00:09:42.279  Shutdown signal received, stop current app iteration
00:09:42.279  Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization...
00:09:42.279  spdk_app_start is called in Round 1.
00:09:42.279  Shutdown signal received, stop current app iteration
00:09:42.279  Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization...
00:09:42.279  spdk_app_start is called in Round 2.
00:09:42.279  Shutdown signal received, stop current app iteration
00:09:42.279  Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 reinitialization...
00:09:42.279  spdk_app_start is called in Round 3.
00:09:42.279  Shutdown signal received, stop current app iteration
00:09:42.279   23:50:57 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:09:42.279   23:50:57 event.app_repeat -- event/event.sh@42 -- # return 0
00:09:42.279  
00:09:42.279  real	0m16.395s
00:09:42.279  user	0m36.137s
00:09:42.279  sys	0m2.501s
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:42.279   23:50:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:42.279  ************************************
00:09:42.279  END TEST app_repeat
00:09:42.279  ************************************
00:09:42.279   23:50:57 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:09:42.279   23:50:57 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:09:42.279   23:50:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:42.279   23:50:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.279   23:50:57 event -- common/autotest_common.sh@10 -- # set +x
00:09:42.279  ************************************
00:09:42.279  START TEST cpu_locks
00:09:42.279  ************************************
00:09:42.279   23:50:57 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:09:42.279  * Looking for test storage...
00:09:42.279  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:09:42.279    23:50:58 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:42.279     23:50:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:09:42.279     23:50:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:42.279    23:50:58 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:42.279     23:50:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:09:42.279     23:50:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:09:42.279     23:50:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:42.279     23:50:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:09:42.279    23:50:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:09:42.279     23:50:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:09:42.538     23:50:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:09:42.538     23:50:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:42.538     23:50:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:09:42.538    23:50:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:09:42.538    23:50:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:42.538    23:50:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:42.538    23:50:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:09:42.538    23:50:58 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:42.538    23:50:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:42.538  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.538  		--rc genhtml_branch_coverage=1
00:09:42.538  		--rc genhtml_function_coverage=1
00:09:42.538  		--rc genhtml_legend=1
00:09:42.538  		--rc geninfo_all_blocks=1
00:09:42.538  		--rc geninfo_unexecuted_blocks=1
00:09:42.538  		
00:09:42.538  		'
00:09:42.538    23:50:58 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:42.538  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.538  		--rc genhtml_branch_coverage=1
00:09:42.538  		--rc genhtml_function_coverage=1
00:09:42.538  		--rc genhtml_legend=1
00:09:42.538  		--rc geninfo_all_blocks=1
00:09:42.538  		--rc geninfo_unexecuted_blocks=1
00:09:42.538  		
00:09:42.538  		'
00:09:42.538    23:50:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:42.538  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.538  		--rc genhtml_branch_coverage=1
00:09:42.538  		--rc genhtml_function_coverage=1
00:09:42.538  		--rc genhtml_legend=1
00:09:42.538  		--rc geninfo_all_blocks=1
00:09:42.538  		--rc geninfo_unexecuted_blocks=1
00:09:42.538  		
00:09:42.538  		'
00:09:42.538    23:50:58 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:42.538  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:42.538  		--rc genhtml_branch_coverage=1
00:09:42.539  		--rc genhtml_function_coverage=1
00:09:42.539  		--rc genhtml_legend=1
00:09:42.539  		--rc geninfo_all_blocks=1
00:09:42.539  		--rc geninfo_unexecuted_blocks=1
00:09:42.539  		
00:09:42.539  		'
00:09:42.539   23:50:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:09:42.539   23:50:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:09:42.539   23:50:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:09:42.539   23:50:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:09:42.539   23:50:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:42.539   23:50:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.539   23:50:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:42.539  ************************************
00:09:42.539  START TEST default_locks
00:09:42.539  ************************************
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2907716
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2907716
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2907716 ']'
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:42.539  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.539   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:42.539  [2024-12-09 23:50:58.235445] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:42.539  [2024-12-09 23:50:58.235492] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907716 ]
00:09:42.539  [2024-12-09 23:50:58.311643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:42.539  [2024-12-09 23:50:58.351930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.797   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.797   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:09:42.797   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2907716
00:09:42.797   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2907716
00:09:42.797   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:43.056  lslocks: write error
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2907716
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2907716 ']'
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2907716
00:09:43.056    23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:43.056    23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907716
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907716'
00:09:43.056  killing process with pid 2907716
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2907716
00:09:43.056   23:50:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2907716
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2907716
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2907716
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:43.315    23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2907716
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2907716 ']'
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:43.315  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:43.315  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2907716) - No such process
00:09:43.315  ERROR: process (pid: 2907716) is no longer running
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:43.315  
00:09:43.315  real	0m0.907s
00:09:43.315  user	0m0.848s
00:09:43.315  sys	0m0.428s
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:43.315   23:50:59 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:43.315  ************************************
00:09:43.315  END TEST default_locks
00:09:43.316  ************************************
00:09:43.316   23:50:59 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:09:43.316   23:50:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:43.316   23:50:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:43.316   23:50:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:43.316  ************************************
00:09:43.316  START TEST default_locks_via_rpc
00:09:43.316  ************************************
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2907894
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2907894
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2907894 ']'
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:43.316  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:43.316   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:43.575  [2024-12-09 23:50:59.215491] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:43.575  [2024-12-09 23:50:59.215533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907894 ]
00:09:43.575  [2024-12-09 23:50:59.290555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.575  [2024-12-09 23:50:59.331643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2907894
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2907894
00:09:43.833   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2907894
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2907894 ']'
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2907894
00:09:44.092    23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:44.092    23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2907894
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2907894'
00:09:44.092  killing process with pid 2907894
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2907894
00:09:44.092   23:50:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2907894
00:09:44.351  
00:09:44.351  real	0m0.994s
00:09:44.351  user	0m0.966s
00:09:44.351  sys	0m0.427s
00:09:44.351   23:51:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:44.351   23:51:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:44.351  ************************************
00:09:44.351  END TEST default_locks_via_rpc
00:09:44.351  ************************************
00:09:44.351   23:51:00 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:44.351   23:51:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:44.351   23:51:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:44.351   23:51:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:44.609  ************************************
00:09:44.609  START TEST non_locking_app_on_locked_coremask
00:09:44.609  ************************************
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2908168
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2908168 /var/tmp/spdk.sock
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2908168 ']'
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:44.609  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:44.609   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:44.609  [2024-12-09 23:51:00.279086] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:44.609  [2024-12-09 23:51:00.279126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908168 ]
00:09:44.609  [2024-12-09 23:51:00.352710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:44.610  [2024-12-09 23:51:00.394578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2908193
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2908193 /var/tmp/spdk2.sock
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2908193 ']'
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:44.869  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:44.869   23:51:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:44.869  [2024-12-09 23:51:00.673079] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:44.869  [2024-12-09 23:51:00.673130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908193 ]
00:09:45.128  [2024-12-09 23:51:00.759082] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:45.128  [2024-12-09 23:51:00.759109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:45.128  [2024-12-09 23:51:00.845433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.696   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:45.696   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:45.696   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2908168
00:09:45.696   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2908168
00:09:45.696   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:46.264  lslocks: write error
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2908168
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2908168 ']'
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2908168
00:09:46.264    23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:46.264    23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908168
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908168'
00:09:46.264  killing process with pid 2908168
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2908168
00:09:46.264   23:51:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2908168
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2908193
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2908193 ']'
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2908193
00:09:46.832    23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:46.832    23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908193
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908193'
00:09:46.832  killing process with pid 2908193
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2908193
00:09:46.832   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2908193
00:09:47.091  
00:09:47.091  real	0m2.680s
00:09:47.091  user	0m2.827s
00:09:47.091  sys	0m0.886s
00:09:47.091   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:47.091   23:51:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:47.091  ************************************
00:09:47.091  END TEST non_locking_app_on_locked_coremask
00:09:47.091  ************************************
00:09:47.091   23:51:02 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:47.091   23:51:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:47.091   23:51:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:47.091   23:51:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:47.350  ************************************
00:09:47.350  START TEST locking_app_on_unlocked_coremask
00:09:47.350  ************************************
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2908756
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2908756 /var/tmp/spdk.sock
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2908756 ']'
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:47.350  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:47.350   23:51:02 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:47.350  [2024-12-09 23:51:03.030725] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:47.350  [2024-12-09 23:51:03.030766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908756 ]
00:09:47.350  [2024-12-09 23:51:03.103686] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:47.350  [2024-12-09 23:51:03.103710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:47.350  [2024-12-09 23:51:03.143947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2908763
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2908763 /var/tmp/spdk2.sock
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2908763 ']'
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:47.609  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:47.609   23:51:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:47.609  [2024-12-09 23:51:03.414585] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:47.609  [2024-12-09 23:51:03.414632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2908763 ]
00:09:47.868  [2024-12-09 23:51:03.500980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:47.868  [2024-12-09 23:51:03.576356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.436   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:48.436   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:48.436   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2908763
00:09:48.436   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2908763
00:09:48.436   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:49.004  lslocks: write error
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2908756
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2908756 ']'
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2908756
00:09:49.004    23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:49.004    23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908756
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908756'
00:09:49.004  killing process with pid 2908756
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2908756
00:09:49.004   23:51:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2908756
00:09:49.570   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2908763
00:09:49.570   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2908763 ']'
00:09:49.570   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2908763
00:09:49.570    23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:49.570   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:49.570    23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2908763
00:09:49.828   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:49.828   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:49.828   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2908763'
00:09:49.828  killing process with pid 2908763
00:09:49.828   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2908763
00:09:49.828   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2908763
00:09:50.087  
00:09:50.087  real	0m2.752s
00:09:50.087  user	0m2.916s
00:09:50.087  sys	0m0.915s
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.087  ************************************
00:09:50.087  END TEST locking_app_on_unlocked_coremask
00:09:50.087  ************************************
00:09:50.087   23:51:05 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:50.087   23:51:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:50.087   23:51:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:50.087   23:51:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:50.087  ************************************
00:09:50.087  START TEST locking_app_on_locked_coremask
00:09:50.087  ************************************
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2909241
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2909241 /var/tmp/spdk.sock
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2909241 ']'
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:50.087  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:50.087   23:51:05 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.087  [2024-12-09 23:51:05.858096] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:50.087  [2024-12-09 23:51:05.858139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909241 ]
00:09:50.087  [2024-12-09 23:51:05.931343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:50.346  [2024-12-09 23:51:05.972507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2909410
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2909410 /var/tmp/spdk2.sock
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2909410 /var/tmp/spdk2.sock
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:50.346    23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2909410 /var/tmp/spdk2.sock
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2909410 ']'
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:50.346  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:50.346   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.605  [2024-12-09 23:51:06.241338] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:50.605  [2024-12-09 23:51:06.241384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909410 ]
00:09:50.605  [2024-12-09 23:51:06.328835] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2909241 has claimed it.
00:09:50.605  [2024-12-09 23:51:06.328870] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:51.172  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2909410) - No such process
00:09:51.172  ERROR: process (pid: 2909410) is no longer running
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2909241
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2909241
00:09:51.172   23:51:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:51.431  lslocks: write error
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2909241
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2909241 ']'
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2909241
00:09:51.431    23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:51.431    23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909241
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909241'
00:09:51.431  killing process with pid 2909241
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2909241
00:09:51.431   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2909241
00:09:51.999  
00:09:51.999  real	0m1.757s
00:09:51.999  user	0m1.881s
00:09:51.999  sys	0m0.582s
00:09:51.999   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.999   23:51:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.999  ************************************
00:09:51.999  END TEST locking_app_on_locked_coremask
00:09:51.999  ************************************
00:09:51.999   23:51:07 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:51.999   23:51:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:51.999   23:51:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.999   23:51:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:51.999  ************************************
00:09:51.999  START TEST locking_overlapped_coremask
00:09:51.999  ************************************
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2909974
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2909974 /var/tmp/spdk.sock
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2909974 ']'
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.999  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:51.999   23:51:07 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.999  [2024-12-09 23:51:07.682781] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:51.999  [2024-12-09 23:51:07.682826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2909974 ]
00:09:51.999  [2024-12-09 23:51:07.770083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:51.999  [2024-12-09 23:51:07.812557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.999  [2024-12-09 23:51:07.812663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.999  [2024-12-09 23:51:07.812664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2910118
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2910118 /var/tmp/spdk2.sock
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2910118 /var/tmp/spdk2.sock
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:52.935    23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2910118 /var/tmp/spdk2.sock
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2910118 ']'
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:52.935  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:52.935   23:51:08 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:52.935  [2024-12-09 23:51:08.571737] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:52.935  [2024-12-09 23:51:08.571786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910118 ]
00:09:52.935  [2024-12-09 23:51:08.662273] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2909974 has claimed it.
00:09:52.935  [2024-12-09 23:51:08.662309] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:53.503  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2910118) - No such process
00:09:53.503  ERROR: process (pid: 2910118) is no longer running
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2909974
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2909974 ']'
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2909974
00:09:53.503    23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:53.503    23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2909974
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2909974'
00:09:53.503  killing process with pid 2909974
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2909974
00:09:53.503   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2909974
00:09:53.762  
00:09:53.762  real	0m1.927s
00:09:53.762  user	0m5.536s
00:09:53.762  sys	0m0.433s
00:09:53.762   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:53.762   23:51:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:53.763  ************************************
00:09:53.763  END TEST locking_overlapped_coremask
00:09:53.763  ************************************
00:09:53.763   23:51:09 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:53.763   23:51:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:53.763   23:51:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.763   23:51:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:54.022  ************************************
00:09:54.022  START TEST locking_overlapped_coremask_via_rpc
00:09:54.022  ************************************
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2910368
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2910368 /var/tmp/spdk.sock
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2910368 ']'
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.022  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.022   23:51:09 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.022  [2024-12-09 23:51:09.677872] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:54.022  [2024-12-09 23:51:09.677914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910368 ]
00:09:54.023  [2024-12-09 23:51:09.751571] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:54.023  [2024-12-09 23:51:09.751600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:54.023  [2024-12-09 23:51:09.789864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:54.023  [2024-12-09 23:51:09.789973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:54.023  [2024-12-09 23:51:09.789973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2910386
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2910386 /var/tmp/spdk2.sock
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2910386 ']'
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:54.281  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.281   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.281  [2024-12-09 23:51:10.060381] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:54.281  [2024-12-09 23:51:10.060446] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910386 ]
00:09:54.540  [2024-12-09 23:51:10.153619] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:54.540  [2024-12-09 23:51:10.153649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:54.540  [2024-12-09 23:51:10.236068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:54.540  [2024-12-09 23:51:10.239206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:54.540  [2024-12-09 23:51:10.239207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:55.137    23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.137  [2024-12-09 23:51:10.927242] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2910368 has claimed it.
00:09:55.137  request:
00:09:55.137  {
00:09:55.137  "method": "framework_enable_cpumask_locks",
00:09:55.137  "req_id": 1
00:09:55.137  }
00:09:55.137  Got JSON-RPC error response
00:09:55.137  response:
00:09:55.137  {
00:09:55.137  "code": -32603,
00:09:55.137  "message": "Failed to claim CPU core: 2"
00:09:55.137  }
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2910368 /var/tmp/spdk.sock
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2910368 ']'
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:55.137  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:55.137   23:51:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2910386 /var/tmp/spdk2.sock
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2910386 ']'
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:55.396  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:55.396   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.655   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:55.656  
00:09:55.656  real	0m1.716s
00:09:55.656  user	0m0.824s
00:09:55.656  sys	0m0.148s
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.656   23:51:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.656  ************************************
00:09:55.656  END TEST locking_overlapped_coremask_via_rpc
00:09:55.656  ************************************
00:09:55.656   23:51:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:09:55.656   23:51:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2910368 ]]
00:09:55.656   23:51:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2910368
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2910368 ']'
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2910368
00:09:55.656    23:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.656    23:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910368
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910368'
00:09:55.656  killing process with pid 2910368
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2910368
00:09:55.656   23:51:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2910368
00:09:55.914   23:51:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2910386 ]]
00:09:55.914   23:51:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2910386
00:09:55.914   23:51:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2910386 ']'
00:09:55.914   23:51:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2910386
00:09:55.914    23:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:55.914   23:51:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.914    23:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2910386
00:09:56.173   23:51:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:56.173   23:51:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:56.174   23:51:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2910386'
00:09:56.174  killing process with pid 2910386
00:09:56.174   23:51:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2910386
00:09:56.174   23:51:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2910386
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2910368 ]]
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2910368
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2910368 ']'
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2910368
00:09:56.433  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2910368) - No such process
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2910368 is not found'
00:09:56.433  Process with pid 2910368 is not found
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2910386 ]]
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2910386
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2910386 ']'
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2910386
00:09:56.433  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2910386) - No such process
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2910386 is not found'
00:09:56.433  Process with pid 2910386 is not found
00:09:56.433   23:51:12 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:56.433  
00:09:56.433  real	0m14.145s
00:09:56.433  user	0m25.634s
00:09:56.433  sys	0m4.782s
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.433   23:51:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:56.433  ************************************
00:09:56.433  END TEST cpu_locks
00:09:56.433  ************************************
00:09:56.433  
00:09:56.433  real	0m38.420s
00:09:56.433  user	1m13.943s
00:09:56.433  sys	0m8.232s
00:09:56.433   23:51:12 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:56.433   23:51:12 event -- common/autotest_common.sh@10 -- # set +x
00:09:56.433  ************************************
00:09:56.433  END TEST event
00:09:56.433  ************************************
00:09:56.433   23:51:12  -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:09:56.433   23:51:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:56.433   23:51:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:56.433   23:51:12  -- common/autotest_common.sh@10 -- # set +x
00:09:56.433  ************************************
00:09:56.433  START TEST thread
00:09:56.433  ************************************
00:09:56.433   23:51:12 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh
00:09:56.692  * Looking for test storage...
00:09:56.692  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:56.692     23:51:12 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:09:56.692     23:51:12 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:56.692    23:51:12 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:56.692    23:51:12 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:56.692    23:51:12 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:56.692    23:51:12 thread -- scripts/common.sh@336 -- # IFS=.-:
00:09:56.692    23:51:12 thread -- scripts/common.sh@336 -- # read -ra ver1
00:09:56.692    23:51:12 thread -- scripts/common.sh@337 -- # IFS=.-:
00:09:56.692    23:51:12 thread -- scripts/common.sh@337 -- # read -ra ver2
00:09:56.692    23:51:12 thread -- scripts/common.sh@338 -- # local 'op=<'
00:09:56.692    23:51:12 thread -- scripts/common.sh@340 -- # ver1_l=2
00:09:56.692    23:51:12 thread -- scripts/common.sh@341 -- # ver2_l=1
00:09:56.692    23:51:12 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:56.692    23:51:12 thread -- scripts/common.sh@344 -- # case "$op" in
00:09:56.692    23:51:12 thread -- scripts/common.sh@345 -- # : 1
00:09:56.692    23:51:12 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:56.692    23:51:12 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:56.692     23:51:12 thread -- scripts/common.sh@365 -- # decimal 1
00:09:56.692     23:51:12 thread -- scripts/common.sh@353 -- # local d=1
00:09:56.692     23:51:12 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.692     23:51:12 thread -- scripts/common.sh@355 -- # echo 1
00:09:56.692    23:51:12 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:09:56.692     23:51:12 thread -- scripts/common.sh@366 -- # decimal 2
00:09:56.692     23:51:12 thread -- scripts/common.sh@353 -- # local d=2
00:09:56.692     23:51:12 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:56.692     23:51:12 thread -- scripts/common.sh@355 -- # echo 2
00:09:56.692    23:51:12 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:09:56.692    23:51:12 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:56.692    23:51:12 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:56.692    23:51:12 thread -- scripts/common.sh@368 -- # return 0
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:56.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.692  		--rc genhtml_branch_coverage=1
00:09:56.692  		--rc genhtml_function_coverage=1
00:09:56.692  		--rc genhtml_legend=1
00:09:56.692  		--rc geninfo_all_blocks=1
00:09:56.692  		--rc geninfo_unexecuted_blocks=1
00:09:56.692  		
00:09:56.692  		'
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:56.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.692  		--rc genhtml_branch_coverage=1
00:09:56.692  		--rc genhtml_function_coverage=1
00:09:56.692  		--rc genhtml_legend=1
00:09:56.692  		--rc geninfo_all_blocks=1
00:09:56.692  		--rc geninfo_unexecuted_blocks=1
00:09:56.692  		
00:09:56.692  		'
00:09:56.692    23:51:12 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:56.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.692  		--rc genhtml_branch_coverage=1
00:09:56.692  		--rc genhtml_function_coverage=1
00:09:56.693  		--rc genhtml_legend=1
00:09:56.693  		--rc geninfo_all_blocks=1
00:09:56.693  		--rc geninfo_unexecuted_blocks=1
00:09:56.693  		
00:09:56.693  		'
00:09:56.693    23:51:12 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:56.693  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.693  		--rc genhtml_branch_coverage=1
00:09:56.693  		--rc genhtml_function_coverage=1
00:09:56.693  		--rc genhtml_legend=1
00:09:56.693  		--rc geninfo_all_blocks=1
00:09:56.693  		--rc geninfo_unexecuted_blocks=1
00:09:56.693  		
00:09:56.693  		'
00:09:56.693   23:51:12 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:56.693   23:51:12 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:56.693   23:51:12 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:56.693   23:51:12 thread -- common/autotest_common.sh@10 -- # set +x
00:09:56.693  ************************************
00:09:56.693  START TEST thread_poller_perf
00:09:56.693  ************************************
00:09:56.693   23:51:12 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:56.693  [2024-12-09 23:51:12.447334] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:56.693  [2024-12-09 23:51:12.447403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2910934 ]
00:09:56.693  [2024-12-09 23:51:12.524570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:56.951  [2024-12-09 23:51:12.563772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:56.951  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:09:57.887  ======================================
00:09:57.887  busy:2108261126 (cyc)
00:09:57.887  total_run_count: 425000
00:09:57.887  tsc_hz: 2100000000 (cyc)
00:09:57.887  ======================================
00:09:57.887  poller_cost: 4960 (cyc), 2361 (nsec)
00:09:57.887  
00:09:57.887  real	0m1.180s
00:09:57.887  user	0m1.098s
00:09:57.887  sys	0m0.078s
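The `poller_cost` reported above can be reproduced from the other counters: busy cycles divided by `total_run_count` gives cycles per poller invocation, and scaling by `tsc_hz` converts cycles to nanoseconds. A quick check with bash integer arithmetic:

```shell
#!/usr/bin/env bash
# Reproduce poller_cost from the counters printed by poller_perf above.
busy=2108261126        # busy TSC cycles over the run
runs=425000            # total_run_count
tsc_hz=2100000000      # TSC frequency, 2.1 GHz
cost_cyc=$(( busy / runs ))                       # cycles per poller run
cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))   # nanoseconds per poller run
echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"
# → poller_cost: 4960 (cyc), 2361 (nsec)
```

This matches the log line exactly; the same arithmetic applied to the zero-period run further below (2101451490 busy cycles over 5069000 runs) yields its reported 414 (cyc), 197 (nsec).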
00:09:57.887   23:51:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.887   23:51:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:57.887  ************************************
00:09:57.887  END TEST thread_poller_perf
00:09:57.887  ************************************
00:09:57.887   23:51:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:57.887   23:51:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:57.887   23:51:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.887   23:51:13 thread -- common/autotest_common.sh@10 -- # set +x
00:09:57.887  ************************************
00:09:57.887  START TEST thread_poller_perf
00:09:57.887  ************************************
00:09:57.887   23:51:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:57.887  [2024-12-09 23:51:13.697655] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:57.887  [2024-12-09 23:51:13.697722] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911175 ]
00:09:58.146  [2024-12-09 23:51:13.775554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:58.146  [2024-12-09 23:51:13.813297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.146  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:59.083  ======================================
00:09:59.083  busy:2101451490 (cyc)
00:09:59.083  total_run_count: 5069000
00:09:59.083  tsc_hz: 2100000000 (cyc)
00:09:59.083  ======================================
00:09:59.083  poller_cost: 414 (cyc), 197 (nsec)
00:09:59.083  
00:09:59.083  real	0m1.175s
00:09:59.083  user	0m1.093s
00:09:59.083  sys	0m0.078s
00:09:59.083   23:51:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.083   23:51:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:59.083  ************************************
00:09:59.083  END TEST thread_poller_perf
00:09:59.083  ************************************
00:09:59.083   23:51:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:09:59.083  
00:09:59.083  real	0m2.668s
00:09:59.083  user	0m2.359s
00:09:59.083  sys	0m0.322s
00:09:59.083   23:51:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.083   23:51:14 thread -- common/autotest_common.sh@10 -- # set +x
00:09:59.083  ************************************
00:09:59.083  END TEST thread
00:09:59.083  ************************************
00:09:59.083   23:51:14  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:09:59.083   23:51:14  -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:09:59.083   23:51:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:59.083   23:51:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:59.083   23:51:14  -- common/autotest_common.sh@10 -- # set +x
00:09:59.342  ************************************
00:09:59.342  START TEST app_cmdline
00:09:59.342  ************************************
00:09:59.342   23:51:14 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:09:59.342  * Looking for test storage...
00:09:59.342  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:09:59.342    23:51:15 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:59.342     23:51:15 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:09:59.342     23:51:15 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:59.342    23:51:15 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@345 -- # : 1
00:09:59.342    23:51:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:59.343     23:51:15 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:59.343    23:51:15 app_cmdline -- scripts/common.sh@368 -- # return 0
00:09:59.343    23:51:15 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:59.343    23:51:15 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:59.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.343  		--rc genhtml_branch_coverage=1
00:09:59.343  		--rc genhtml_function_coverage=1
00:09:59.343  		--rc genhtml_legend=1
00:09:59.343  		--rc geninfo_all_blocks=1
00:09:59.343  		--rc geninfo_unexecuted_blocks=1
00:09:59.343  		
00:09:59.343  		'
00:09:59.343    23:51:15 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:59.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.343  		--rc genhtml_branch_coverage=1
00:09:59.343  		--rc genhtml_function_coverage=1
00:09:59.343  		--rc genhtml_legend=1
00:09:59.343  		--rc geninfo_all_blocks=1
00:09:59.343  		--rc geninfo_unexecuted_blocks=1
00:09:59.343  		
00:09:59.343  		'
00:09:59.343    23:51:15 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:59.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.343  		--rc genhtml_branch_coverage=1
00:09:59.343  		--rc genhtml_function_coverage=1
00:09:59.343  		--rc genhtml_legend=1
00:09:59.343  		--rc geninfo_all_blocks=1
00:09:59.343  		--rc geninfo_unexecuted_blocks=1
00:09:59.343  		
00:09:59.343  		'
00:09:59.343    23:51:15 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:09:59.343  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.343  		--rc genhtml_branch_coverage=1
00:09:59.343  		--rc genhtml_function_coverage=1
00:09:59.343  		--rc genhtml_legend=1
00:09:59.343  		--rc geninfo_all_blocks=1
00:09:59.343  		--rc geninfo_unexecuted_blocks=1
00:09:59.343  		
00:09:59.343  		'
00:09:59.343   23:51:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:09:59.343   23:51:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2911472
00:09:59.343   23:51:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2911472
00:09:59.343   23:51:15 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2911472 ']'
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:59.343  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.343   23:51:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:59.343  [2024-12-09 23:51:15.187597] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:09:59.343  [2024-12-09 23:51:15.187647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2911472 ]
00:09:59.601  [2024-12-09 23:51:15.262657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.601  [2024-12-09 23:51:15.303119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:59.859   23:51:15 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:59.859   23:51:15 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:09:59.859   23:51:15 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:09:59.859  {
00:09:59.859    "version": "SPDK v25.01-pre git sha1 06358c250",
00:09:59.859    "fields": {
00:09:59.859      "major": 25,
00:09:59.859      "minor": 1,
00:09:59.859      "patch": 0,
00:09:59.859      "suffix": "-pre",
00:09:59.859      "commit": "06358c250"
00:09:59.859    }
00:09:59.859  }
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:10:00.118    23:51:15 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:10:00.118    23:51:15 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:10:00.118    23:51:15 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.118    23:51:15 app_cmdline -- app/cmdline.sh@26 -- # sort
00:10:00.118    23:51:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:00.118    23:51:15 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:10:00.118   23:51:15 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.118    23:51:15 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.118    23:51:15 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:10:00.118   23:51:15 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.118  request:
00:10:00.118  {
00:10:00.118    "method": "env_dpdk_get_mem_stats",
00:10:00.118    "req_id": 1
00:10:00.118  }
00:10:00.119  Got JSON-RPC error response
00:10:00.119  response:
00:10:00.119  {
00:10:00.119    "code": -32601,
00:10:00.119    "message": "Method not found"
00:10:00.119  }
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:00.119   23:51:15 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2911472
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2911472 ']'
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2911472
00:10:00.119    23:51:15 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:10:00.119   23:51:15 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.119    23:51:15 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2911472
00:10:00.377   23:51:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.377   23:51:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:00.377   23:51:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2911472'
00:10:00.377  killing process with pid 2911472
00:10:00.377   23:51:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 2911472
00:10:00.377   23:51:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 2911472
00:10:00.636  
00:10:00.636  real	0m1.345s
00:10:00.636  user	0m1.565s
00:10:00.636  sys	0m0.455s
00:10:00.636   23:51:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:00.636   23:51:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:00.636  ************************************
00:10:00.636  END TEST app_cmdline
00:10:00.636  ************************************
00:10:00.636   23:51:16  -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:10:00.636   23:51:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:00.636   23:51:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:00.636   23:51:16  -- common/autotest_common.sh@10 -- # set +x
00:10:00.636  ************************************
00:10:00.636  START TEST version
00:10:00.636  ************************************
00:10:00.636   23:51:16 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:10:00.636  * Looking for test storage...
00:10:00.636  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:10:00.896    23:51:16 version -- app/version.sh@17 -- # get_header_version major
00:10:00.896    23:51:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # cut -f2
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # tr -d '"'
00:10:00.896   23:51:16 version -- app/version.sh@17 -- # major=25
00:10:00.896    23:51:16 version -- app/version.sh@18 -- # get_header_version minor
00:10:00.896    23:51:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # cut -f2
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # tr -d '"'
00:10:00.896   23:51:16 version -- app/version.sh@18 -- # minor=1
00:10:00.896    23:51:16 version -- app/version.sh@19 -- # get_header_version patch
00:10:00.896    23:51:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # cut -f2
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # tr -d '"'
00:10:00.896   23:51:16 version -- app/version.sh@19 -- # patch=0
00:10:00.896    23:51:16 version -- app/version.sh@20 -- # get_header_version suffix
00:10:00.896    23:51:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # cut -f2
00:10:00.896    23:51:16 version -- app/version.sh@14 -- # tr -d '"'
00:10:00.896   23:51:16 version -- app/version.sh@20 -- # suffix=-pre
00:10:00.896   23:51:16 version -- app/version.sh@22 -- # version=25.1
00:10:00.896   23:51:16 version -- app/version.sh@25 -- # (( patch != 0 ))
00:10:00.896   23:51:16 version -- app/version.sh@28 -- # version=25.1rc0
00:10:00.896   23:51:16 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:10:00.896    23:51:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:10:00.896   23:51:16 version -- app/version.sh@30 -- # py_version=25.1rc0
00:10:00.896   23:51:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:10:00.896  
00:10:00.896  real	0m0.245s
00:10:00.896  user	0m0.160s
00:10:00.896  sys	0m0.128s
00:10:00.896   23:51:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:00.896   23:51:16 version -- common/autotest_common.sh@10 -- # set +x
00:10:00.896  ************************************
00:10:00.896  END TEST version
00:10:00.896  ************************************
00:10:00.896   23:51:16  -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:10:00.896    23:51:16  -- spdk/autotest.sh@194 -- # uname -s
00:10:00.896   23:51:16  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:10:00.896   23:51:16  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:10:00.896   23:51:16  -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:10:00.896   23:51:16  -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@260 -- # timing_exit lib
00:10:00.896   23:51:16  -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:00.896   23:51:16  -- common/autotest_common.sh@10 -- # set +x
00:10:00.896   23:51:16  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@277 -- # export NET_TYPE
00:10:00.896   23:51:16  -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']'
00:10:00.896   23:51:16  -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:10:00.896   23:51:16  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:00.896   23:51:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:00.896   23:51:16  -- common/autotest_common.sh@10 -- # set +x
00:10:00.896  ************************************
00:10:00.896  START TEST nvmf_tcp
00:10:00.896  ************************************
00:10:00.896   23:51:16 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:10:01.156  * Looking for test storage...
00:10:01.156  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:10:01.156    23:51:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:10:01.156   23:51:16 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:10:01.156   23:51:16 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:10:01.156   23:51:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:01.156   23:51:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.156   23:51:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:10:01.156  ************************************
00:10:01.156  START TEST nvmf_target_core
00:10:01.156  ************************************
00:10:01.156   23:51:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:10:01.416  * Looking for test storage...
00:10:01.416  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:01.417     23:51:17 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:01.417      23:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.417      23:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.417      23:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.417      23:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:10:01.417      23:51:17 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:01.417  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:01.417    23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0
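The "integer expression expected" error above comes from `common.sh` line 33 comparing an empty string with `-eq`: `'[' '' -eq 1 ']'` is not a valid integer test, so `[` prints the diagnostic and returns a non-zero status (which the script tolerates and continues past). A minimal sketch of the failure and one defensive rewrite — the variable name `FLAG` here is a hypothetical stand-in for whichever unset flag the script tested:

```shell
#!/bin/sh
# Reproduce the error recorded in the log: testing an empty string with
# -eq is not an integer comparison, so [ complains and exits non-zero.
FLAG=''                              # hypothetical stand-in for the unset flag
[ "$FLAG" -eq 1 ] 2>/dev/null
echo "unguarded exit status: $?"     # non-zero: not an integer expression

# Defensive form: default the empty/unset value to 0 before comparing.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

The `${FLAG:-0}` expansion substitutes `0` when the variable is unset or empty, so the comparison always sees an integer.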
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:01.417  ************************************
00:10:01.417  START TEST nvmf_abort
00:10:01.417  ************************************
00:10:01.417   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:10:01.417  * Looking for test storage...
00:10:01.678  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0
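The `lt 1.15 2` trace above walks `scripts/common.sh`'s `cmp_versions` field by field (split on `.-:`, then compare each component numerically). A rough standalone equivalent, assuming GNU `sort -V` (version sort) is available — this is a sketch of the same "is A strictly older than B" check, not the script's actual implementation:

```shell
#!/bin/sh
# version_lt A B: returns 0 (true) when version A sorts strictly before B.
# Relies on GNU coreutils' sort -V for component-wise version ordering.
version_lt() {
    [ "$1" = "$2" ] && return 1   # equal versions are not "less than"
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"      # same verdict as the trace above
version_lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Here `lcov --version` reporting 1.15 compares as older than 2, so the script falls through to the pre-2.0 `--rc lcov_*` option set seen a few lines later.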
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:01.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.678  		--rc genhtml_branch_coverage=1
00:10:01.678  		--rc genhtml_function_coverage=1
00:10:01.678  		--rc genhtml_legend=1
00:10:01.678  		--rc geninfo_all_blocks=1
00:10:01.678  		--rc geninfo_unexecuted_blocks=1
00:10:01.678  		
00:10:01.678  		'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:01.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.678  		--rc genhtml_branch_coverage=1
00:10:01.678  		--rc genhtml_function_coverage=1
00:10:01.678  		--rc genhtml_legend=1
00:10:01.678  		--rc geninfo_all_blocks=1
00:10:01.678  		--rc geninfo_unexecuted_blocks=1
00:10:01.678  		
00:10:01.678  		'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:01.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.678  		--rc genhtml_branch_coverage=1
00:10:01.678  		--rc genhtml_function_coverage=1
00:10:01.678  		--rc genhtml_legend=1
00:10:01.678  		--rc geninfo_all_blocks=1
00:10:01.678  		--rc geninfo_unexecuted_blocks=1
00:10:01.678  		
00:10:01.678  		'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:01.678  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.678  		--rc genhtml_branch_coverage=1
00:10:01.678  		--rc genhtml_function_coverage=1
00:10:01.678  		--rc genhtml_legend=1
00:10:01.678  		--rc geninfo_all_blocks=1
00:10:01.678  		--rc geninfo_unexecuted_blocks=1
00:10:01.678  		
00:10:01.678  		'
00:10:01.678   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:01.678    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:01.678     23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:01.678      23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.678      23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.679      23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.679      23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:10:01.679      23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:01.679  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:01.679    23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:10:01.679   23:51:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:10:08.249  Found 0000:af:00.0 (0x8086 - 0x159b)
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:10:08.249  Found 0000:af:00.1 (0x8086 - 0x159b)
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:10:08.249  Found net devices under 0000:af:00.0: cvl_0_0
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:10:08.249  Found net devices under 0000:af:00.1: cvl_0_1
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
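The device-discovery trace above resolves each supported PCI address to its kernel network interface through a sysfs glob (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`), then strips the path prefix to get names like `cvl_0_0`. A standalone sketch of that lookup, run against a mock sysfs tree under a temp directory so it works without the E810 hardware (a real run would glob under `/sys`):

```shell
#!/bin/sh
# Mimic nvmf/common.sh's PCI -> net-device lookup against a fake sysfs
# tree; the temp-directory layout mirrors /sys/bus/pci/devices/.../net/.
sysroot=$(mktemp -d)
mkdir -p "$sysroot/bus/pci/devices/0000:af:00.0/net/cvl_0_0"
mkdir -p "$sysroot/bus/pci/devices/0000:af:00.1/net/cvl_0_1"

for pci in 0000:af:00.0 0000:af:00.1; do
    for dev in "$sysroot/bus/pci/devices/$pci/net/"*; do
        # ${dev##*/} drops the directory prefix, leaving the interface name,
        # matching the "${pci_net_devs[@]##*/}" expansion in the trace.
        echo "Found net devices under $pci: ${dev##*/}"
    done
done
rm -rf "$sysroot"
```

With both globs matching, `net_devs` ends up holding `cvl_0_0` and `cvl_0_1`, which is why the `(( 2 == 0 ))` emptiness check fails and `is_hw` flips to `yes`.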
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:08.249   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:08.250  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:08.250  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms
00:10:08.250  
00:10:08.250  --- 10.0.0.2 ping statistics ---
00:10:08.250  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.250  rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:08.250  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:08.250  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:10:08.250  
00:10:08.250  --- 10.0.0.1 ping statistics ---
00:10:08.250  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:08.250  rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
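The two ping checks above only need the exit status to confirm target/initiator connectivity across the namespace boundary, but the iputils summary line is easy to parse if a test wants the measured latency. A sketch over the exact summary format printed above (the sample line is copied from this log, not re-measured):

```shell
#!/bin/sh
# Extract the average rtt from iputils ping's summary line, e.g.:
#   rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms
line='rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms'
# Splitting on both '/' and ' ' makes the avg value field 8.
avg=$(printf '%s\n' "$line" | awk -F'[/ ]' '{print $8}')
echo "avg rtt: $avg ms"    # -> avg rtt: 0.341 ms
```

In a live test this would be fed from `ping -c 1 10.0.0.2 | tail -n1` rather than a literal string.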
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2915088
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2915088
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2915088 ']'
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:08.250  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250  [2024-12-09 23:51:23.426398] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:10:08.250  [2024-12-09 23:51:23.426443] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:08.250  [2024-12-09 23:51:23.502447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:08.250  [2024-12-09 23:51:23.543222] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:08.250  [2024-12-09 23:51:23.543256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:08.250  [2024-12-09 23:51:23.543262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:08.250  [2024-12-09 23:51:23.543267] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:08.250  [2024-12-09 23:51:23.543272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:08.250  [2024-12-09 23:51:23.544618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:08.250  [2024-12-09 23:51:23.544723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:08.250  [2024-12-09 23:51:23.544724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250  [2024-12-09 23:51:23.692916] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250  Malloc0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250  Delay0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250  [2024-12-09 23:51:23.764690] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:08.250   23:51:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:10:08.250  [2024-12-09 23:51:23.943291] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:10.784  Initializing NVMe Controllers
00:10:10.784  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:10:10.784  controller IO queue size 128 less than required
00:10:10.784  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:10:10.784  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:10:10.784  Initialization complete. Launching workers.
00:10:10.784  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37489
00:10:10.784  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37550, failed to submit 62
00:10:10.784  	 success 37493, unsuccessful 57, failed 0
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:10.784  rmmod nvme_tcp
00:10:10.784  rmmod nvme_fabrics
00:10:10.784  rmmod nvme_keyring
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2915088 ']'
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2915088
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2915088 ']'
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2915088
00:10:10.784    23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:10.784    23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2915088
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2915088'
00:10:10.784  killing process with pid 2915088
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2915088
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2915088
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:10.784   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:10:10.785   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:10.785   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:10.785   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:10.785   23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:10.785    23:51:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:12.690  
00:10:12.690  real	0m11.211s
00:10:12.690  user	0m11.821s
00:10:12.690  sys	0m5.396s
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:10:12.690  ************************************
00:10:12.690  END TEST nvmf_abort
00:10:12.690  ************************************
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:12.690  ************************************
00:10:12.690  START TEST nvmf_ns_hotplug_stress
00:10:12.690  ************************************
00:10:12.690   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:10:12.949  * Looking for test storage...
00:10:12.949  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:12.949     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:10:12.949     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:10:12.949    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:12.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.950  		--rc genhtml_branch_coverage=1
00:10:12.950  		--rc genhtml_function_coverage=1
00:10:12.950  		--rc genhtml_legend=1
00:10:12.950  		--rc geninfo_all_blocks=1
00:10:12.950  		--rc geninfo_unexecuted_blocks=1
00:10:12.950  		
00:10:12.950  		'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:12.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.950  		--rc genhtml_branch_coverage=1
00:10:12.950  		--rc genhtml_function_coverage=1
00:10:12.950  		--rc genhtml_legend=1
00:10:12.950  		--rc geninfo_all_blocks=1
00:10:12.950  		--rc geninfo_unexecuted_blocks=1
00:10:12.950  		
00:10:12.950  		'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:10:12.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.950  		--rc genhtml_branch_coverage=1
00:10:12.950  		--rc genhtml_function_coverage=1
00:10:12.950  		--rc genhtml_legend=1
00:10:12.950  		--rc geninfo_all_blocks=1
00:10:12.950  		--rc geninfo_unexecuted_blocks=1
00:10:12.950  		
00:10:12.950  		'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:10:12.950  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:12.950  		--rc genhtml_branch_coverage=1
00:10:12.950  		--rc genhtml_function_coverage=1
00:10:12.950  		--rc genhtml_legend=1
00:10:12.950  		--rc geninfo_all_blocks=1
00:10:12.950  		--rc geninfo_unexecuted_blocks=1
00:10:12.950  		
00:10:12.950  		'
00:10:12.950   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:12.950    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:12.950     23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:12.950      23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.950      23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.950      23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.950      23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:10:12.951      23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:12.951  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:12.951    23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:10:12.951   23:51:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:19.520   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:10:19.521  Found 0000:af:00.0 (0x8086 - 0x159b)
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:10:19.521  Found 0000:af:00.1 (0x8086 - 0x159b)
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:10:19.521  Found net devices under 0000:af:00.0: cvl_0_0
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:10:19.521  Found net devices under 0000:af:00.1: cvl_0_1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:19.521  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:19.521  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms
00:10:19.521  
00:10:19.521  --- 10.0.0.2 ping statistics ---
00:10:19.521  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.521  rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:19.521  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:19.521  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:10:19.521  
00:10:19.521  --- 10.0.0.1 ping statistics ---
00:10:19.521  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:19.521  rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
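The trace above (`nvmf_tcp_init` in `nvmf/common.sh`) moves the target-side port `cvl_0_0` into its own network namespace so initiator and target traffic cross the physical link, then verifies connectivity in both directions. A dry-run recap of that sequence, using the interface names from this run — the function only echoes the commands, since the real ones need root; pipe its output to `sh -` (as root) to apply them:

```shell
#!/usr/bin/env sh
# Dry-run recap of the netns plumbing traced above.
# Interface names are the ones from this run; substitute your own NICs.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=${TGT_IF}_ns_spdk

print_netns_setup() {
    # One echoed line per command in the trace; pipe to "sh -" (as root) to apply.
    echo "ip -4 addr flush $TGT_IF"
    echo "ip -4 addr flush $INI_IF"
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"                        # target side lives in the netns
    echo "ip addr add 10.0.0.1/24 dev $INI_IF"                  # initiator keeps the host stack
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    echo "ip netns exec $NS ip link set lo up"
    echo "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"                                   # host -> netns
    echo "ip netns exec $NS ping -c 1 10.0.0.1"                 # netns -> host
}

print_netns_setup
```

The two pings at the end mirror the trace: both must succeed before the target app is started inside the namespace.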
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:19.521   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2919041
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2919041
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2919041 ']'
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:19.522  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:19.522  [2024-12-09 23:51:34.742312] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:10:19.522  [2024-12-09 23:51:34.742355] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:19.522  [2024-12-09 23:51:34.814282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:19.522  [2024-12-09 23:51:34.855522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:19.522  [2024-12-09 23:51:34.855557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:19.522  [2024-12-09 23:51:34.855564] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:19.522  [2024-12-09 23:51:34.855570] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:19.522  [2024-12-09 23:51:34.855576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:19.522  [2024-12-09 23:51:34.856930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:19.522  [2024-12-09 23:51:34.857040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:19.522  [2024-12-09 23:51:34.857040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:10:19.522   23:51:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:19.522  [2024-12-09 23:51:35.177991] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:19.522   23:51:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:19.781   23:51:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:19.781  [2024-12-09 23:51:35.571444] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:19.781   23:51:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:20.041   23:51:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:10:20.299  Malloc0
00:10:20.299   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:20.558  Delay0
00:10:20.558   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:20.558   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:10:20.816  NULL1
00:10:20.816   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
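The RPC calls traced above assemble the target before the perf load starts: a TCP transport, subsystem `cnode1` (plus discovery) listening on 10.0.0.2:4420, and two namespaces — a delay-wrapped malloc bdev (`Delay0`) and a resizable null bdev (`NULL1`). A side-effect-free recap of the same calls, with the NQN and arguments from this run (`run` only echoes; swap it for `"$@"` against a live target to issue the real RPCs):

```shell
#!/usr/bin/env sh
# Dry-run recap of the target-setup RPCs traced above.
RPC="scripts/rpc.py"                  # in the log: $rootdir/scripts/rpc.py via the netns
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "$@"; }                  # swap body for "$@" to issue real RPCs

run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
run "$RPC" bdev_malloc_create 32 512 -b Malloc0      # 32 MB, 512 B blocks
run "$RPC" bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000      # 1 s latency on every I/O class
run "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
run "$RPC" bdev_null_create NULL1 1000 512           # size 1000 (MB), later resized
run "$RPC" nvmf_subsystem_add_ns "$NQN" NULL1
```

`NULL1` is the namespace the stress loop below repeatedly resizes; `Delay0` is the one it hot-removes and re-adds.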
00:10:21.075   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:10:21.075   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2919507
00:10:21.075   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:21.075   23:51:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:21.333   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:21.592   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:10:21.592   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:10:21.592  true
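From here the trace repeats one pattern per iteration of `target/ns_hotplug_stress.sh`: while the `spdk_nvme_perf` process is still alive (`kill -0 $PERF_PID`), namespace 1 is hot-removed, `Delay0` is re-attached, and `NULL1` is grown by one unit. A dry-run sketch of that loop with a fixed iteration count standing in for the perf-liveness check (`run` only echoes; the real script issues the RPCs until perf exits):

```shell
#!/usr/bin/env sh
# Dry-run sketch of the ns-hotplug stress loop traced in this log.
RPC="scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
run() { echo "$@"; }                 # swap body for "$@" to issue real RPCs

null_size=1000
for i in 1 2 3; do                   # real loop condition: kill -0 "$PERF_PID"
    run "$RPC" nvmf_subsystem_remove_ns "$NQN" 1      # hot-remove nsid 1 under load
    run "$RPC" nvmf_subsystem_add_ns "$NQN" Delay0    # hot-add it back
    null_size=$((null_size + 1))
    run "$RPC" bdev_null_resize NULL1 "$null_size"    # grow the other namespace
done
echo "final null_size=$null_size"
```

That matches the trace, where `null_size` climbs 1001, 1002, 1003, … across iterations while the 30-second randread perf run hammers the subsystem.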
00:10:21.592   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:21.592   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:21.851   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:22.109   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:10:22.109   23:51:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:10:22.368  true
00:10:22.368   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:22.368   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:22.627   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:22.886   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:10:22.886   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:10:22.886  true
00:10:22.886   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:22.886   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:23.146   23:51:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:23.405   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:10:23.406   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:10:23.669  true
00:10:23.669   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:23.669   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:24.058   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:24.058   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:10:24.058   23:51:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:10:24.375  true
00:10:24.375   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:24.375   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:24.634   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:24.634   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:10:24.634   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:10:24.962  true
00:10:24.962   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:24.962   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:25.221   23:51:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:25.480   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:10:25.480   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:10:25.480  true
00:10:25.480   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:25.480   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:25.740   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:25.999   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:10:25.999   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:10:26.259  true
00:10:26.259   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:26.259   23:51:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:26.518   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:26.776   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:10:26.776   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:10:26.776  true
00:10:27.034   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:27.034   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.034   23:51:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:27.292   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:10:27.292   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:10:27.550  true
00:10:27.550   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:27.550   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:27.807   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:28.065   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:10:28.065   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:10:28.323  true
00:10:28.323   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:28.324   23:51:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:28.324   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:28.582   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:10:28.582   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:10:28.840  true
00:10:28.840   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:28.840   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:29.098   23:51:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:29.356   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:10:29.356   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:10:29.613  true
00:10:29.613   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:29.613   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:29.872   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:29.872   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:10:29.872   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:10:30.131  true
00:10:30.131   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:30.131   23:51:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:30.389   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:30.647   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:10:30.648   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:10:30.906  true
00:10:30.906   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:30.906   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:31.164   23:51:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:31.164   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:10:31.164   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:10:31.422  true
00:10:31.422   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:31.422   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:31.680   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:31.938   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:10:31.938   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:10:32.196  true
00:10:32.196   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:32.196   23:51:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:32.453   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:32.453   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:10:32.453   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:10:32.711  true
00:10:32.711   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:32.711   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:32.969   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:33.227   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:10:33.227   23:51:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:10:33.485  true
00:10:33.485   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:33.485   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:33.743   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:34.027   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:10:34.027   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:10:34.027  true
00:10:34.027   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:34.027   23:51:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:34.285   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:34.542   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:10:34.542   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:10:34.800  true
00:10:34.800   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:34.800   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:35.058   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:35.058   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:10:35.058   23:51:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:10:35.316  true
00:10:35.316   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:35.316   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:35.575   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:35.834   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:10:35.834   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:10:36.092  true
00:10:36.092   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:36.092   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:36.349   23:51:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:36.608   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:10:36.608   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:10:36.608  true
00:10:36.608   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:36.608   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:36.866   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:37.124   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:10:37.124   23:51:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:10:37.383  true
00:10:37.383   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:37.383   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:37.641   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:37.898   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:10:37.898   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:10:37.899  true
00:10:37.899   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:37.899   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:38.156   23:51:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:38.415   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:10:38.415   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:10:38.673  true
00:10:38.673   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:38.673   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:38.931   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:39.189   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:10:39.189   23:51:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:10:39.189  true
00:10:39.189   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:39.189   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:39.447   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:39.705   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:10:39.705   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:10:39.964  true
00:10:39.964   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:39.964   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:40.222   23:51:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:40.480   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:10:40.480   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:10:40.480  true
00:10:40.480   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:40.480   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:40.738   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:40.997   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:10:40.997   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:10:41.255  true
00:10:41.255   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:41.255   23:51:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:41.513   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:41.772   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:10:41.772   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:10:41.772  true
00:10:41.772   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:41.772   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:42.031   23:51:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:42.290   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:10:42.290   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:10:42.549  true
00:10:42.549   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:42.549   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:42.807   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:43.066   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:10:43.066   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:10:43.324  true
00:10:43.324   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:43.324   23:51:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:43.324   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:43.583   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:10:43.583   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:10:43.841  true
00:10:43.841   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:43.841   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:44.100   23:51:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:44.358   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:10:44.359   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:10:44.359  true
00:10:44.617   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:44.617   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:44.618   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:44.887   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:10:44.887   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:10:45.145  true
00:10:45.145   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:45.145   23:52:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:45.404   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:45.662   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:10:45.662   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:10:45.662  true
00:10:45.921   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:45.921   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:45.921   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:46.179   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:10:46.179   23:52:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:10:46.438  true
00:10:46.438   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:46.438   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:46.697   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:46.956   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:10:46.956   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:10:46.956  true
00:10:47.214   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:47.214   23:52:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:47.214   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:47.473   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:10:47.473   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:10:47.731  true
00:10:47.731   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:47.731   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:47.990   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:48.249   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:10:48.249   23:52:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:10:48.508  true
00:10:48.508   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:48.508   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:48.508   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:48.766   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:10:48.766   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:10:49.026  true
00:10:49.026   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:49.026   23:52:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:49.285   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:49.543   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:10:49.543   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:10:49.802  true
00:10:49.802   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:49.802   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:49.802   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:50.060   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:10:50.060   23:52:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:10:50.318  true
00:10:50.318   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:50.318   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:50.577   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:50.836   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:10:50.836   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:10:51.095  true
00:10:51.095   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:51.095   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:51.353   23:52:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:10:51.353  Initializing NVMe Controllers
00:10:51.353  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:51.354  Controller SPDK bdev Controller (SPDK00000000000001  ): Skipping inactive NS 1
00:10:51.354  Controller IO queue size 128, less than required.
00:10:51.354  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:51.354  WARNING: Some requested NVMe devices were skipped
00:10:51.354  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:10:51.354  Initialization complete. Launching workers.
00:10:51.354  ========================================================
00:10:51.354                                                                                                               Latency(us)
00:10:51.354  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:10:51.354  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   27759.63      13.55    4610.87    2104.16   43336.38
00:10:51.354  ========================================================
00:10:51.354  Total                                                                    :   27759.63      13.55    4610.87    2104.16   43336.38
00:10:51.354  
00:10:51.354   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:10:51.354   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:10:51.612  true
00:10:51.612   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2919507
00:10:51.612  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2919507) - No such process
00:10:51.612   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2919507
00:10:51.612   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:51.871   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
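The xtrace above reflects the main stress loop of `ns_hotplug_stress.sh`: while the background I/O workload is still alive (checked with `kill -0`), the namespace is removed and re-added and the backing null bdev is grown by one unit each pass; once `kill -0` reports "No such process", the script `wait`s on the workload and cleans up both namespaces. A minimal sketch of that control flow, with a stubbed `rpc` function standing in for `scripts/rpc.py` and a bounded iteration count added so the sketch terminates:

```shell
#!/usr/bin/env bash
# Sketch of the resize-while-workload-alive loop. `rpc` is a stub; the real
# script invokes scripts/rpc.py against the running SPDK target.
rpc() { echo "rpc $*"; }

perf_pid=$$      # stand-in for the background I/O workload PID
null_size=1000   # the log shows this counting up: 1045, 1046, 1047, ...
iterations=0

while kill -0 "$perf_pid" 2>/dev/null && (( iterations < 3 )); do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    (( ++null_size ))
    rpc bdev_null_resize NULL1 "$null_size"
    (( ++iterations ))
done
echo "final size: $null_size"
```

In the real run the loop exits only when the workload process terminates on its own, which is why the log shows `kill: (2919507) - No such process` immediately before the `wait`.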
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:10:52.129  null0
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.129   23:52:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:10:52.388  null1
00:10:52.388   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.388   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.388   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:10:52.646  null2
00:10:52.646   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.646   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.646   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:10:52.905  null3
00:10:52.905   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:52.905   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:52.905   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:10:52.905  null4
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:10:53.163  null5
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:53.163   23:52:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:10:53.422  null6
00:10:53.422   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:53.422   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:10:53.422   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:10:53.681  null7
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
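The eight `bdev_null_create null0 100 4096` ... `null7 100 4096` calls above come from a simple indexed loop over `nthreads`; the two trailing arguments are the bdev size in MiB and the block size in bytes. A sketch of that loop, again with a stub `rpc` in place of `scripts/rpc.py`:

```shell
#!/usr/bin/env bash
# Sketch of the null-bdev creation loop. `rpc` is a stub for scripts/rpc.py;
# the echoed name mimics rpc.py printing the created bdev's name.
rpc() { echo "created $2"; }

nthreads=8
created=()
for (( i = 0; i < nthreads; i++ )); do
    # real call: rpc.py bdev_null_create null$i 100 4096  (100 MiB, 4096-byte blocks)
    created+=( "$(rpc bdev_null_create "null$i" 100 4096)" )
done
echo "${#created[@]} bdevs created"
```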
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.681   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:53.682   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2925020 2925021 2925023 2925025 2925027 2925029 2925031 2925033
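The interleaved xtrace in this stretch is eight `add_remove` workers launched in the background, one per namespace/bdev pair, each looping ten times through `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns`; the parent collects each `$!` into `pids` and then `wait`s on all of them (the `wait 2925020 2925021 ...` line above). A sketch of that fan-out, with `rpc` stubbed out:

```shell
#!/usr/bin/env bash
# Sketch of the parallel add/remove stress workers. `rpc` is a no-op stub
# standing in for scripts/rpc.py.
rpc() { :; }

add_remove() {   # $1 = nsid, $2 = backing bdev name
    local nsid=$1 bdev=$2 i
    for (( i = 0; i < 10; i++ )); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
for (( i = 0; i < nthreads; i++ )); do
    add_remove "$(( i + 1 ))" "null$i" &   # nsid 1..8 maps to null0..null7
    pids+=($!)
done
wait "${pids[@]}"
echo "${#pids[@]} workers finished"
```

Because the eight workers race against each other, the add/remove lines in the log appear in nondeterministic order across namespaces, exactly as seen above.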
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:53.941   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:54.200   23:52:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:54.458   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.459   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:54.718   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:54.977   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:55.236   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:55.236   23:52:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:55.236   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.236   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.237   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:55.496   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:55.755   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.014   23:52:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:56.273   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:56.530   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:56.788   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:57.047   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:57.306   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:57.306   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:57.306   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:57.306   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:57.306   23:52:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.306   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:10:57.566   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:57.825  rmmod nvme_tcp
00:10:57.825  rmmod nvme_fabrics
00:10:57.825  rmmod nvme_keyring
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
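The cleanup above (nvmf/common.sh@124-129) suspends errexit and retries `modprobe -v -r` inside a `for i in {1..20}` loop, because module removal can fail transiently while references drain. A minimal sketch of that retry-unload pattern, with `try_unload` as an illustrative stand-in for `modprobe -r nvme-tcp` (the real script's backoff and module names are as shown in the log, not here):

```shell
# Retry-unload pattern: suspend errexit, retry up to 20 times, restore errexit.
attempts=0
try_unload() {
    attempts=$((attempts + 1))
    # Pretend the module only unloads cleanly on the third attempt.
    [ "$attempts" -ge 3 ]
}

set +e
for i in $(seq 1 20); do
    try_unload && break
done
set -e
echo "unloaded after $attempts attempts"
```

The `set +e` / `set -e` bracket matters: without it, the first failed `modprobe -r` would abort the whole cleanup under errexit.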
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2919041 ']'
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2919041
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2919041 ']'
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2919041
00:10:57.825    23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:57.825    23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2919041
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2919041'
00:10:57.825  killing process with pid 2919041
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2919041
00:10:57.825   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2919041
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:58.085   23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:58.085    23:52:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:00.137  
00:11:00.137  real	0m47.446s
00:11:00.137  user	3m21.198s
00:11:00.137  sys	0m17.621s
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:00.137  ************************************
00:11:00.137  END TEST nvmf_ns_hotplug_stress
00:11:00.137  ************************************
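The rpc.py traffic above comes from the stress loop in target/ns_hotplug_stress.sh@16-18: ten iterations, each attaching namespaces 1-8 (backed by null bdevs null0-null7) and then detaching them, with the calls racing in parallel — which is why the add/remove lines interleave out of order in the log. A minimal sketch of that loop shape, where `rpc` is an illustrative stub standing in for scripts/rpc.py (the real script talks to a live nvmf target):

```shell
# Stub for scripts/rpc.py so the loop structure can run standalone.
rpc() { echo "rpc $*"; }

hotplug_cycle() {
    # Attach ns 1-8, each backed by null bdev null0-null7, in parallel.
    for n in 1 2 3 4 5 6 7 8; do
        rpc nvmf_subsystem_add_ns -n "$n" nqn.2016-06.io.spdk:cnode1 "null$((n - 1))" &
    done
    wait
    # Detach them again, also in parallel.
    for n in 1 2 3 4 5 6 7 8; do
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$n" &
    done
    wait
}

i=0
while [ "$i" -lt 10 ]; do   # mirrors the script's (( i < 10 )) / (( ++i ))
    hotplug_cycle
    i=$((i + 1))
done
```

Backgrounding each RPC and only `wait`ing between the add and remove phases is what exercises concurrent namespace attach/detach on the target.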
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:00.137  ************************************
00:11:00.137  START TEST nvmf_delete_subsystem
00:11:00.137  ************************************
00:11:00.137   23:52:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:00.396  * Looking for test storage...
00:11:00.396  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:11:00.396    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:00.396     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:00.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.397  		--rc genhtml_branch_coverage=1
00:11:00.397  		--rc genhtml_function_coverage=1
00:11:00.397  		--rc genhtml_legend=1
00:11:00.397  		--rc geninfo_all_blocks=1
00:11:00.397  		--rc geninfo_unexecuted_blocks=1
00:11:00.397  		
00:11:00.397  		'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:00.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.397  		--rc genhtml_branch_coverage=1
00:11:00.397  		--rc genhtml_function_coverage=1
00:11:00.397  		--rc genhtml_legend=1
00:11:00.397  		--rc geninfo_all_blocks=1
00:11:00.397  		--rc geninfo_unexecuted_blocks=1
00:11:00.397  		
00:11:00.397  		'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:00.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.397  		--rc genhtml_branch_coverage=1
00:11:00.397  		--rc genhtml_function_coverage=1
00:11:00.397  		--rc genhtml_legend=1
00:11:00.397  		--rc geninfo_all_blocks=1
00:11:00.397  		--rc geninfo_unexecuted_blocks=1
00:11:00.397  		
00:11:00.397  		'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:00.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:00.397  		--rc genhtml_branch_coverage=1
00:11:00.397  		--rc genhtml_function_coverage=1
00:11:00.397  		--rc genhtml_legend=1
00:11:00.397  		--rc geninfo_all_blocks=1
00:11:00.397  		--rc geninfo_unexecuted_blocks=1
00:11:00.397  		
00:11:00.397  		'
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:00.397     23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:00.397      23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:00.397      23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:00.397      23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:00.397      23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:11:00.397      23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:00.397  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
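The `[: : integer expression expected` message above is real script fallout, not test traffic: nvmf/common.sh@33 evaluates `'[' '' -eq 1 ']'`, and `-eq` requires integers on both sides, so an empty variable makes `test` fail with status 2 (the run survives because the script is lenient here). A small reproduction and the usual defensive form — `flag` is an illustrative name, since the log does not show which variable common.sh tests:

```shell
flag=""

# Broken form: reproduces the log's error; test exits with status 2.
[ "$flag" -eq 1 ] 2>/dev/null
echo "bare test status: $?"

# Defensive form: default an empty/unset value to 0 before comparing,
# so the comparison is always numeric.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

`${flag:-0}` (or quoting plus an explicit emptiness check) is the standard fix when a feature flag may be unset in the environment.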
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:00.397    23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:11:00.397   23:52:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
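The classification above keys a `pci_bus_cache` associative array by `vendor:device` ID and appends the matching BDF addresses into per-NIC-family arrays (`e810`, `x722`, `mlx`), then narrows `pci_devs` to the preferred family. A minimal runnable sketch of that pattern, with a mocked cache instead of a real PCI scan (device IDs and BDFs here mirror the log but are illustrative):

```shell
# Mocked pci_bus_cache: vendor:device -> space-separated BDF list
declare -A pci_bus_cache
intel=0x8086 mellanox=0x15b3
pci_bus_cache["$intel:0x159b"]="0000:af:00.0 0000:af:00.1"   # two E810 ports
pci_bus_cache["$intel:0x37d2"]=""                            # no X722 present
pci_bus_cache["$mellanox:0x1017"]=""                         # no CX-5 present

e810=() x722=() mlx=() pci_devs=()
# Unquoted expansion is intentional: it word-splits the BDF list into elements
e810+=(${pci_bus_cache["$intel:0x159b"]})
x722+=(${pci_bus_cache["$intel:0x37d2"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})

# With an e810 preference (as in this run), pci_devs narrows to the e810 set
pci_devs=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
    echo "Found $pci (0x8086 - 0x159b)"
done
```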
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:11:06.976  Found 0000:af:00.0 (0x8086 - 0x159b)
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:11:06.976  Found 0000:af:00.1 (0x8086 - 0x159b)
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:11:06.976  Found net devices under 0000:af:00.0: cvl_0_0
00:11:06.976   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:11:06.977  Found net devices under 0000:af:00.1: cvl_0_1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
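The net-device discovery above globs each PCI device's `net/` directory in sysfs and then strips the directory prefix with `${var##*/}` to keep only the interface names. A self-contained sketch of the same two-step pattern against a temporary sysfs-like tree (so it runs without the hardware; paths and names are illustrative):

```shell
# Build a fake sysfs tree with one net interface per PCI function
tmp=$(mktemp -d)
mkdir -p "$tmp/devices/0000:af:00.0/net/cvl_0_0"
mkdir -p "$tmp/devices/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$tmp/devices/$pci/net/"*)   # glob: full paths
    pci_net_devs=("${pci_net_devs[@]##*/}")    # strip dirs: interface names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$tmp"
```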
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:06.977   23:52:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:06.977  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:06.977  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms
00:11:06.977  
00:11:06.977  --- 10.0.0.2 ping statistics ---
00:11:06.977  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:06.977  rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:06.977  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:06.977  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms
00:11:06.977  
00:11:06.977  --- 10.0.0.1 ping statistics ---
00:11:06.977  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:06.977  rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
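The `nvmf_tcp_init` sequence above builds a split topology: the target-side port (`cvl_0_0`) moves into its own network namespace with 10.0.0.2, the initiator-side port (`cvl_0_1`) stays in the root namespace with 10.0.0.1, an iptables rule tagged `SPDK_NVMF` (for later cleanup) opens TCP port 4420, and a ping in each direction verifies connectivity. A dry-run sketch of those commands, echoed rather than executed since the real thing needs root and the two NIC ports (names mirror this log):

```shell
# Dry-run wrapper: swap 'echo' for direct execution when running as root
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                      # target port into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF                       # tagged for cleanup
run ping -c 1 10.0.0.2                                   # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```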
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2929421
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2929421
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2929421 ']'
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:06.977  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977  [2024-12-09 23:52:22.221820] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:06.977  [2024-12-09 23:52:22.221864] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:06.977  [2024-12-09 23:52:22.298730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:06.977  [2024-12-09 23:52:22.338129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:06.977  [2024-12-09 23:52:22.338164] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:06.977  [2024-12-09 23:52:22.338179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:06.977  [2024-12-09 23:52:22.338185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:06.977  [2024-12-09 23:52:22.338191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:06.977  [2024-12-09 23:52:22.339349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:06.977  [2024-12-09 23:52:22.339351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
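`waitforlisten` above polls until the freshly started `nvmf_tgt` answers on its RPC socket, bounded by `max_retries`. A simplified, runnable sketch of that retry loop, polling for a temp file created by a background job as a stand-in for the real `/var/tmp/spdk.sock` check:

```shell
# Stand-in for nvmf_tgt creating its RPC socket a moment after launch
rpc_addr=$(mktemp -u)                  # path that does not exist yet
( sleep 0.2; touch "$rpc_addr" ) &

max_retries=100
i=0
until [ -e "$rpc_addr" ]; do           # real helper probes the UNIX socket via rpc.py
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && { echo 'timed out'; exit 1; }
    sleep 0.1
done
wait
echo "process is listening on $rpc_addr"
rm -f "$rpc_addr"
```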
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977  [2024-12-09 23:52:22.483609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977  [2024-12-09 23:52:22.503831] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.977  NULL1
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.977   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.978  Delay0
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2929576
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:11:06.978   23:52:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:06.978  [2024-12-09 23:52:22.615604] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
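The target in this test is assembled by the six RPCs shown above, in order: create the TCP transport, create the subsystem, add a listener on 10.0.0.2:4420, create a null bdev, wrap it in a delay bdev with large artificial latencies (so `spdk_nvme_perf` still has I/O in flight when the subsystem is deleted), and attach it as a namespace. A dry-run sketch of that sequence (echoed; the real calls go through SPDK's `rpc.py` against the `nvmf_tgt` inside the namespace):

```shell
# Dry-run wrapper; replace 'echo' with the real rpc.py invocation when live
rpc() { echo "+ rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512          # 1000 MB backing store, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s delay on every I/O path
rpc nvmf_subsystem_add_ns "$NQN" Delay0
```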
00:11:08.880   23:52:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:08.880   23:52:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:08.880   23:52:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:08.880  Write completed with error (sct=0, sc=8)
00:11:08.880  starting I/O failed: -6
00:11:08.880  [...further 'Read/Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6', elided...]
00:11:08.880  [2024-12-09 23:52:24.732486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d960 is same with the state(6) to be set
00:11:08.880  Read completed with error (sct=0, sc=8)
00:11:08.880  [...further 'Read/Write completed with error (sct=0, sc=8)' completions, interleaved with 'starting I/O failed: -6', elided...]
00:11:08.881  [2024-12-09 23:52:24.734644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f9c00d060 is same with the state(6) to be set
00:11:08.881  Write completed with error (sct=0, sc=8)
00:11:08.881  [...further 'Read/Write completed with error (sct=0, sc=8)' completions elided...]
00:11:08.881  [2024-12-09 23:52:24.735088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f9c00d390 is same with the state(6) to be set
00:11:08.881  Read completed with error (sct=0, sc=8)
00:11:08.881  [...further 'Read/Write completed with error (sct=0, sc=8)' completions elided...]
00:11:08.881  [2024-12-09 23:52:24.735268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f9c000c80 is same with the state(6) to be set
00:11:10.256  [2024-12-09 23:52:25.709820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200e9b0 is same with the state(6) to be set
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  [...further 'Read/Write completed with error (sct=0, sc=8)' completions elided...]
00:11:10.256  [2024-12-09 23:52:25.731326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f0f9c00d6c0 is same with the state(6) to be set
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  [...further 'Read/Write completed with error (sct=0, sc=8)' completions elided...]
00:11:10.256  [2024-12-09 23:52:25.735711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d2c0 is same with the state(6) to be set
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Write completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Write completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.256  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  [2024-12-09 23:52:25.736883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d780 is same with the state(6) to be set
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Write completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  Read completed with error (sct=0, sc=8)
00:11:10.257  [2024-12-09 23:52:25.737453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200db40 is same with the state(6) to be set
00:11:10.257  Initializing NVMe Controllers
00:11:10.257  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:10.257  Controller IO queue size 128, less than required.
00:11:10.257  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:10.257  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:10.257  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:10.257  Initialization complete. Launching workers.
00:11:10.257  ========================================================
00:11:10.257                                                                                                               Latency(us)
00:11:10.257  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:10.257  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     168.62       0.08  968680.06    2316.68 1043572.50
00:11:10.257  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     145.74       0.07  915785.87     459.16 1010312.13
00:11:10.257  ========================================================
00:11:10.257  Total                                                                    :     314.37       0.15  944157.92     459.16 1043572.50
00:11:10.257  
00:11:10.257  [2024-12-09 23:52:25.738046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200e9b0 (9): Bad file descriptor
00:11:10.257  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:10.257   23:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.257   23:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:10.257   23:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2929576
00:11:10.257   23:52:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2929576
00:11:10.516  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2929576) - No such process
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2929576
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2929576
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:10.516    23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2929576
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:10.516  [2024-12-09 23:52:26.264747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2930169
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:10.516   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:10.516  [2024-12-09 23:52:26.357711] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:11:11.083   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:11.083   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:11.083   23:52:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:11.661   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:11.661   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:11.661   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:12.228   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:12.228   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:12.228   23:52:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:12.487   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:12.487   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:12.487   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:13.054   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:13.054   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:13.054   23:52:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:13.624   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:13.624   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:13.624   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:13.883  Initializing NVMe Controllers
00:11:13.883  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:13.883  Controller IO queue size 128, less than required.
00:11:13.883  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:13.883  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:13.883  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:13.883  Initialization complete. Launching workers.
00:11:13.883  ========================================================
00:11:13.883                                                                                                               Latency(us)
00:11:13.883  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:13.883  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002362.82 1000132.90 1041440.10
00:11:13.883  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004376.54 1000187.66 1041669.40
00:11:13.883  ========================================================
00:11:13.883  Total                                                                    :     256.00       0.12 1003369.68 1000132.90 1041669.40
00:11:13.883  
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2930169
00:11:14.142  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2930169) - No such process
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2930169
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:14.142  rmmod nvme_tcp
00:11:14.142  rmmod nvme_fabrics
00:11:14.142  rmmod nvme_keyring
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2929421 ']'
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2929421
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2929421 ']'
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2929421
00:11:14.142    23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:14.142    23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2929421
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2929421'
00:11:14.142  killing process with pid 2929421
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2929421
00:11:14.142   23:52:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2929421
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:14.402   23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:14.402    23:52:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:16.307   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:16.567  
00:11:16.567  real	0m16.180s
00:11:16.567  user	0m29.303s
00:11:16.567  sys	0m5.392s
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:16.567  ************************************
00:11:16.567  END TEST nvmf_delete_subsystem
00:11:16.567  ************************************
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:16.567  ************************************
00:11:16.567  START TEST nvmf_host_management
00:11:16.567  ************************************
00:11:16.567   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:11:16.567  * Looking for test storage...
00:11:16.567  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:16.567     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:11:16.567     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:11:16.567    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:16.568  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.568  		--rc genhtml_branch_coverage=1
00:11:16.568  		--rc genhtml_function_coverage=1
00:11:16.568  		--rc genhtml_legend=1
00:11:16.568  		--rc geninfo_all_blocks=1
00:11:16.568  		--rc geninfo_unexecuted_blocks=1
00:11:16.568  		
00:11:16.568  		'
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:16.568  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.568  		--rc genhtml_branch_coverage=1
00:11:16.568  		--rc genhtml_function_coverage=1
00:11:16.568  		--rc genhtml_legend=1
00:11:16.568  		--rc geninfo_all_blocks=1
00:11:16.568  		--rc geninfo_unexecuted_blocks=1
00:11:16.568  		
00:11:16.568  		'
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:16.568  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.568  		--rc genhtml_branch_coverage=1
00:11:16.568  		--rc genhtml_function_coverage=1
00:11:16.568  		--rc genhtml_legend=1
00:11:16.568  		--rc geninfo_all_blocks=1
00:11:16.568  		--rc geninfo_unexecuted_blocks=1
00:11:16.568  		
00:11:16.568  		'
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:16.568  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:16.568  		--rc genhtml_branch_coverage=1
00:11:16.568  		--rc genhtml_function_coverage=1
00:11:16.568  		--rc genhtml_legend=1
00:11:16.568  		--rc geninfo_all_blocks=1
00:11:16.568  		--rc geninfo_unexecuted_blocks=1
00:11:16.568  		
00:11:16.568  		'
00:11:16.568   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:16.568    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:16.568     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:16.828     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:11:16.828     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:16.828     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:16.828     23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:16.828      23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:16.828      23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:16.828      23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:16.828      23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:11:16.828      23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:16.828  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:16.828    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:16.829    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:16.829    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:16.829    23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:11:16.829   23:52:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:23.399   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:11:23.400  Found 0000:af:00.0 (0x8086 - 0x159b)
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:11:23.400  Found 0000:af:00.1 (0x8086 - 0x159b)
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:11:23.400  Found net devices under 0000:af:00.0: cvl_0_0
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:11:23.400  Found net devices under 0000:af:00.1: cvl_0_1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:23.400  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:23.400  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms
00:11:23.400  
00:11:23.400  --- 10.0.0.2 ping statistics ---
00:11:23.400  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:23.400  rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:23.400  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:23.400  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms
00:11:23.400  
00:11:23.400  --- 10.0.0.1 ping statistics ---
00:11:23.400  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:23.400  rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2934230
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2934230
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2934230 ']'
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:23.400  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.400  [2024-12-09 23:52:38.490758] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:23.400  [2024-12-09 23:52:38.490802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:23.400  [2024-12-09 23:52:38.569930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:11:23.400  [2024-12-09 23:52:38.611313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:23.400  [2024-12-09 23:52:38.611353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:23.400  [2024-12-09 23:52:38.611360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:23.400  [2024-12-09 23:52:38.611366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:23.400  [2024-12-09 23:52:38.611371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:23.400  [2024-12-09 23:52:38.612913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:23.400  [2024-12-09 23:52:38.613020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:23.400  [2024-12-09 23:52:38.613125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:23.400  [2024-12-09 23:52:38.613126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:11:23.400   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401  [2024-12-09 23:52:38.750465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401  Malloc0
00:11:23.401  [2024-12-09 23:52:38.820973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2934466
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2934466 /var/tmp/bdevperf.sock
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2934466 ']'
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:11:23.401  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:11:23.401   23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:11:23.401  {
00:11:23.401    "params": {
00:11:23.401      "name": "Nvme$subsystem",
00:11:23.401      "trtype": "$TEST_TRANSPORT",
00:11:23.401      "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:23.401      "adrfam": "ipv4",
00:11:23.401      "trsvcid": "$NVMF_PORT",
00:11:23.401      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:23.401      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:23.401      "hdgst": ${hdgst:-false},
00:11:23.401      "ddgst": ${ddgst:-false}
00:11:23.401    },
00:11:23.401    "method": "bdev_nvme_attach_controller"
00:11:23.401  }
00:11:23.401  EOF
00:11:23.401  )")
00:11:23.401     23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:11:23.401    23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:11:23.401     23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:11:23.401     23:52:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:23.401    "params": {
00:11:23.401      "name": "Nvme0",
00:11:23.401      "trtype": "tcp",
00:11:23.401      "traddr": "10.0.0.2",
00:11:23.401      "adrfam": "ipv4",
00:11:23.401      "trsvcid": "4420",
00:11:23.401      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:23.401      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:23.401      "hdgst": false,
00:11:23.401      "ddgst": false
00:11:23.401    },
00:11:23.401    "method": "bdev_nvme_attach_controller"
00:11:23.401  }'
00:11:23.401  [2024-12-09 23:52:38.916521] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:23.401  [2024-12-09 23:52:38.916566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934466 ]
00:11:23.401  [2024-12-09 23:52:38.990840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:23.401  [2024-12-09 23:52:39.030227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:23.401  Running I/O for 10 seconds...
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:11:23.401   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:23.660    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:23.660    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:23.660    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.660    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.660    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.660   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78
00:11:23.660   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']'
00:11:23.660   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:11:23.920   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:11:23.920   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:11:23.920    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:11:23.920    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:11:23.920    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.920    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.920    23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
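00:11:23.921  The xtrace above walks host_management.sh's waitforio helper: up to 10 polls of bdev_get_iostat over the bdevperf RPC socket, 0.25 s apart, until at least 100 read ops are observed on Nvme0n1. A minimal runnable sketch of that loop follows; the RPC-plus-jq pipeline (`rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`) is replaced by a mock that replays the two samples seen in the trace (78, then 707), so `get_read_ops` and its counters are assumptions for illustration:

```shell
#!/usr/bin/env bash
# Sketch of waitforio (host_management.sh @52-64). get_read_ops is a mock
# standing in for the bdevperf iostat RPC; it replays the trace's samples.
counts=(78 707)                 # read-op counts observed in the log above
poll=0
get_read_ops() { read_io_count=${counts[poll]}; poll=$((poll + 1)); }

waitforio() {
  local ret=1 i
  for (( i = 10; i != 0; i-- )); do   # at most 10 polls
    get_read_ops                      # real script: rpc_cmd ... | jq -r ...
    if [ "$read_io_count" -ge 100 ]; then   # enough IO has flowed
      ret=0
      break
    fi
    sleep 0.25                        # back off before the next poll
  done
  return "$ret"
}

waitforio && echo "IO is flowing"
```

In the trace this takes exactly two iterations: the first sample (78) misses the threshold and triggers the `sleep 0.25`, the second (707) sets `ret=0` and breaks.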
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.921   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.921  [2024-12-09 23:52:39.599874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:11:23.921  [2024-12-09 23:52:39.599911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:11:23.921  [2024-12-09 23:52:39.599928-600846] 63 further in-flight commands aborted with the same SQ DELETION (00/08) status: WRITE sqid:1 cid:20-63 (lba:100864-106368) and READ sqid:1 cid:0-18 (lba:98304-100608), len:128 each
00:11:23.922  [2024-12-09 23:52:39.600854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb6d770 is same with the state(6) to be set
00:11:23.922  [2024-12-09 23:52:39.601845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:11:23.922  task offset: 100736 on job bdev=Nvme0n1 fails
00:11:23.922  Job: Nvme0n1 ended in about 0.40 seconds with error
00:11:23.922  
00:11:23.922                                                                                                 Latency(us)
00:11:23.922  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:23.922  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:23.922  	 Verification LBA range: start 0x0 length 0x400
00:11:23.922  	 Nvme0n1             :       0.40    1904.18     119.01     158.68     0.00   30210.47    1466.76   27213.04
00:11:23.922  ===================================================================================================================
00:11:23.922  Total                       :               1904.18     119.01     158.68     0.00   30210.47    1466.76   27213.04
00:11:23.922  [2024-12-09 23:52:39.604232] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:23.922  [2024-12-09 23:52:39.604252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9547e0 (9): Bad file descriptor
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:23.922   23:52:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:11:23.922  [2024-12-09 23:52:39.624947] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:11:24.859   23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2934466
00:11:24.859  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2934466) - No such process
00:11:24.859   23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
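00:11:24.859  The two lines above show why host_management.sh line 91 is written `kill -9 $perfpid || true`: the first bdevperf has already died, so the kill fails with "No such process", and the `|| true` keeps the script alive under errexit. A small runnable sketch of that tolerant cleanup, using a throwaway `sleep` as a hypothetical stand-in for the bdevperf process (the real PID 2934466 from the log is not reused here):

```shell
#!/usr/bin/env bash
set -e                          # the test scripts run with errexit enabled
sleep 0.1 &                     # stand-in for the bdevperf process
perfpid=$!
wait "$perfpid"                 # by now the process has exited, like 2934466 above
kill -9 "$perfpid" 2>/dev/null || true   # "No such process" must not abort the script
echo "cleanup continued"
```

Without the `|| true`, the failing `kill` would terminate the whole test under `set -e` instead of letting the cleanup finish.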
00:11:24.859   23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:11:24.859   23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:11:24.859  {
00:11:24.859    "params": {
00:11:24.859      "name": "Nvme$subsystem",
00:11:24.859      "trtype": "$TEST_TRANSPORT",
00:11:24.859      "traddr": "$NVMF_FIRST_TARGET_IP",
00:11:24.859      "adrfam": "ipv4",
00:11:24.859      "trsvcid": "$NVMF_PORT",
00:11:24.859      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:11:24.859      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:11:24.859      "hdgst": ${hdgst:-false},
00:11:24.859      "ddgst": ${ddgst:-false}
00:11:24.859    },
00:11:24.859    "method": "bdev_nvme_attach_controller"
00:11:24.859  }
00:11:24.859  EOF
00:11:24.859  )")
00:11:24.859     23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:11:24.859    23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:11:24.859     23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:11:24.859     23:52:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:11:24.859    "params": {
00:11:24.859      "name": "Nvme0",
00:11:24.859      "trtype": "tcp",
00:11:24.859      "traddr": "10.0.0.2",
00:11:24.859      "adrfam": "ipv4",
00:11:24.859      "trsvcid": "4420",
00:11:24.859      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:11:24.859      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:11:24.859      "hdgst": false,
00:11:24.859      "ddgst": false
00:11:24.859    },
00:11:24.859    "method": "bdev_nvme_attach_controller"
00:11:24.859  }'
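00:11:24.859  The heredoc template above is gen_nvmf_target_json expanding per-subsystem variables into the attach-controller config that bdevperf reads from /dev/fd/62. A condensed, runnable sketch of that helper, with the values the trace resolved (tcp transport, 10.0.0.2:4420) supplied as assumed defaults and the real version's jq post-processing omitted:

```shell
#!/usr/bin/env bash
# Condensed sketch of gen_nvmf_target_json (nvmf/common.sh @560-586).
# Defaults below mirror the values printed in the trace; they are assumptions.
gen_nvmf_target_json() {
  local subsystem=${1:-0}
  local TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
  local NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
  local NVMF_PORT=${NVMF_PORT:-4420}
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_nvmf_target_json 0
```

The test then feeds this to bdevperf via process substitution (which is why the trace shows `--json /dev/fd/62`), e.g. `bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1`.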
00:11:24.859  [2024-12-09 23:52:40.668789] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:24.860  [2024-12-09 23:52:40.668837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934713 ]
00:11:25.118  [2024-12-09 23:52:40.744774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:25.118  [2024-12-09 23:52:40.784234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.377  Running I/O for 1 seconds...
00:11:26.314       1984.00 IOPS,   124.00 MiB/s
00:11:26.314                                                                                                  Latency(us)
00:11:26.314  
[2024-12-09T22:52:42.171Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:26.314  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:11:26.314  	 Verification LBA range: start 0x0 length 0x400
00:11:26.314  	 Nvme0n1             :       1.02    2015.73     125.98       0.00     0.00   31262.16    5991.86   27587.54
00:11:26.314  
[2024-12-09T22:52:42.171Z]  ===================================================================================================================
00:11:26.314  
00:11:26.314  Total                       :               2015.73     125.98       0.00     0.00   31262.16    5991.86   27587.54
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:26.314   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:26.573  rmmod nvme_tcp
00:11:26.573  rmmod nvme_fabrics
00:11:26.573  rmmod nvme_keyring
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2934230 ']'
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2934230
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2934230 ']'
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2934230
00:11:26.573    23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:26.573    23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2934230
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2934230'
00:11:26.573  killing process with pid 2934230
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2934230
00:11:26.573   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2934230
00:11:26.833  [2024-12-09 23:52:42.444656] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:26.833   23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:26.833    23:52:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:11:28.740  
00:11:28.740  real	0m12.304s
00:11:28.740  user	0m19.212s
00:11:28.740  sys	0m5.585s
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:11:28.740  ************************************
00:11:28.740  END TEST nvmf_host_management
00:11:28.740  ************************************
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:28.740   23:52:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:29.000  ************************************
00:11:29.000  START TEST nvmf_lvol
00:11:29.000  ************************************
00:11:29.000   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:11:29.000  * Looking for test storage...
00:11:29.001  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:29.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.001  		--rc genhtml_branch_coverage=1
00:11:29.001  		--rc genhtml_function_coverage=1
00:11:29.001  		--rc genhtml_legend=1
00:11:29.001  		--rc geninfo_all_blocks=1
00:11:29.001  		--rc geninfo_unexecuted_blocks=1
00:11:29.001  		
00:11:29.001  		'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:29.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.001  		--rc genhtml_branch_coverage=1
00:11:29.001  		--rc genhtml_function_coverage=1
00:11:29.001  		--rc genhtml_legend=1
00:11:29.001  		--rc geninfo_all_blocks=1
00:11:29.001  		--rc geninfo_unexecuted_blocks=1
00:11:29.001  		
00:11:29.001  		'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:29.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.001  		--rc genhtml_branch_coverage=1
00:11:29.001  		--rc genhtml_function_coverage=1
00:11:29.001  		--rc genhtml_legend=1
00:11:29.001  		--rc geninfo_all_blocks=1
00:11:29.001  		--rc geninfo_unexecuted_blocks=1
00:11:29.001  		
00:11:29.001  		'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:29.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:29.001  		--rc genhtml_branch_coverage=1
00:11:29.001  		--rc genhtml_function_coverage=1
00:11:29.001  		--rc genhtml_legend=1
00:11:29.001  		--rc geninfo_all_blocks=1
00:11:29.001  		--rc geninfo_unexecuted_blocks=1
00:11:29.001  		
00:11:29.001  		'
00:11:29.001   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:29.001     23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:29.001      23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:29.001      23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:29.001      23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:29.001      23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:11:29.001      23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:29.001  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:29.001    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:29.001   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:11:29.001   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:11:29.001   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:11:29.001   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:29.002    23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:11:29.002   23:52:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:11:35.573  Found 0000:af:00.0 (0x8086 - 0x159b)
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:11:35.573  Found 0000:af:00.1 (0x8086 - 0x159b)
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:35.573   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:11:35.574  Found net devices under 0000:af:00.0: cvl_0_0
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:11:35.574  Found net devices under 0000:af:00.1: cvl_0_1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:35.574  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:35.574  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms
00:11:35.574  
00:11:35.574  --- 10.0.0.2 ping statistics ---
00:11:35.574  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:35.574  rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:35.574  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:35.574  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms
00:11:35.574  
00:11:35.574  --- 10.0.0.1 ping statistics ---
00:11:35.574  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:35.574  rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2938487
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2938487
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2938487 ']'
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:35.574  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:35.574   23:52:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:35.574  [2024-12-09 23:52:50.869336] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:35.574  [2024-12-09 23:52:50.869378] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:35.574  [2024-12-09 23:52:50.944715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:11:35.574  [2024-12-09 23:52:50.983448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:35.574  [2024-12-09 23:52:50.983487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:35.574  [2024-12-09 23:52:50.983494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:35.574  [2024-12-09 23:52:50.983500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:35.574  [2024-12-09 23:52:50.983505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:35.574  [2024-12-09 23:52:50.984815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:35.574  [2024-12-09 23:52:50.984923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:35.574  [2024-12-09 23:52:50.984925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:35.574   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:35.574  [2024-12-09 23:52:51.290443] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:35.575    23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:35.834   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:11:35.834    23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:11:36.093   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:11:36.093   23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:11:36.093    23:52:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:11:36.352   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=08a39d90-3c1c-4e69-aafb-5abfc5989684
00:11:36.352    23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 08a39d90-3c1c-4e69-aafb-5abfc5989684 lvol 20
00:11:36.611   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=24ef2d59-3808-4bf2-8d29-03d9ae36a238
00:11:36.611   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:36.869   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24ef2d59-3808-4bf2-8d29-03d9ae36a238
00:11:37.128   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:37.128  [2024-12-09 23:52:52.910052] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:37.128   23:52:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:37.387   23:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2938901
00:11:37.387   23:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:11:37.387   23:52:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:11:38.339    23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 24ef2d59-3808-4bf2-8d29-03d9ae36a238 MY_SNAPSHOT
00:11:38.598   23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=0ff1dfec-d6c5-45f9-a809-3f3eebee8038
00:11:38.599   23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 24ef2d59-3808-4bf2-8d29-03d9ae36a238 30
00:11:38.857    23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 0ff1dfec-d6c5-45f9-a809-3f3eebee8038 MY_CLONE
00:11:39.117   23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c94cc633-7946-44ea-a8be-dfbc56008740
00:11:39.117   23:52:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c94cc633-7946-44ea-a8be-dfbc56008740
00:11:39.685   23:52:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2938901
00:11:47.825  Initializing NVMe Controllers
00:11:47.825  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:11:47.825  Controller IO queue size 128, less than required.
00:11:47.825  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:47.825  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:11:47.825  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:11:47.825  Initialization complete. Launching workers.
00:11:47.825  ========================================================
00:11:47.825                                                                                                               Latency(us)
00:11:47.825  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:47.825  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   12004.80      46.89   10668.00    2103.66   55145.95
00:11:47.825  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   11863.00      46.34   10790.23    3208.73   48744.52
00:11:47.825  ========================================================
00:11:47.825  Total                                                                    :   23867.80      93.23   10728.75    2103.66   55145.95
00:11:47.825  
00:11:47.825   23:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:11:47.825   23:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24ef2d59-3808-4bf2-8d29-03d9ae36a238
00:11:48.083   23:53:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 08a39d90-3c1c-4e69-aafb-5abfc5989684
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:11:48.342  rmmod nvme_tcp
00:11:48.342  rmmod nvme_fabrics
00:11:48.342  rmmod nvme_keyring
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2938487 ']'
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2938487
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2938487 ']'
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2938487
00:11:48.342    23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:11:48.342   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:48.342    23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2938487
00:11:48.343   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:48.343   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:48.343   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2938487'
00:11:48.343  killing process with pid 2938487
00:11:48.343   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2938487
00:11:48.343   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2938487
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:48.602   23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:48.602    23:53:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:51.139  
00:11:51.139  real	0m21.836s
00:11:51.139  user	1m2.759s
00:11:51.139  sys	0m7.620s
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:11:51.139  ************************************
00:11:51.139  END TEST nvmf_lvol
00:11:51.139  ************************************
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:51.139  ************************************
00:11:51.139  START TEST nvmf_lvs_grow
00:11:51.139  ************************************
00:11:51.139   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:11:51.139  * Looking for test storage...
00:11:51.139  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:51.139     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:51.139    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:51.140  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.140  		--rc genhtml_branch_coverage=1
00:11:51.140  		--rc genhtml_function_coverage=1
00:11:51.140  		--rc genhtml_legend=1
00:11:51.140  		--rc geninfo_all_blocks=1
00:11:51.140  		--rc geninfo_unexecuted_blocks=1
00:11:51.140  		
00:11:51.140  		'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:51.140  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.140  		--rc genhtml_branch_coverage=1
00:11:51.140  		--rc genhtml_function_coverage=1
00:11:51.140  		--rc genhtml_legend=1
00:11:51.140  		--rc geninfo_all_blocks=1
00:11:51.140  		--rc geninfo_unexecuted_blocks=1
00:11:51.140  		
00:11:51.140  		'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:51.140  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.140  		--rc genhtml_branch_coverage=1
00:11:51.140  		--rc genhtml_function_coverage=1
00:11:51.140  		--rc genhtml_legend=1
00:11:51.140  		--rc geninfo_all_blocks=1
00:11:51.140  		--rc geninfo_unexecuted_blocks=1
00:11:51.140  		
00:11:51.140  		'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:11:51.140  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.140  		--rc genhtml_branch_coverage=1
00:11:51.140  		--rc genhtml_function_coverage=1
00:11:51.140  		--rc genhtml_legend=1
00:11:51.140  		--rc geninfo_all_blocks=1
00:11:51.140  		--rc geninfo_unexecuted_blocks=1
00:11:51.140  		
00:11:51.140  		'
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:11:51.140     23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:11:51.140      23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:51.140      23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:51.140      23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:51.140      23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:11:51.140      23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:11:51.140  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:11:51.140    23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:11:51.140   23:53:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:11:57.713  Found 0000:af:00.0 (0x8086 - 0x159b)
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:11:57.713   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:11:57.714  Found 0000:af:00.1 (0x8086 - 0x159b)
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:11:57.714  Found net devices under 0000:af:00.0: cvl_0_0
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:11:57.714  Found net devices under 0000:af:00.1: cvl_0_1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:11:57.714  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:57.714  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms
00:11:57.714  
00:11:57.714  --- 10.0.0.2 ping statistics ---
00:11:57.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.714  rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:57.714  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:57.714  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:11:57.714  
00:11:57.714  --- 10.0.0.1 ping statistics ---
00:11:57.714  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.714  rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2944385
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2944385
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2944385 ']'
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:57.714  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:57.714   23:53:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:57.714  [2024-12-09 23:53:12.868576] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:57.714  [2024-12-09 23:53:12.868618] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:57.714  [2024-12-09 23:53:12.946056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:57.714  [2024-12-09 23:53:12.983718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:57.714  [2024-12-09 23:53:12.983753] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:57.714  [2024-12-09 23:53:12.983760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:57.714  [2024-12-09 23:53:12.983765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:57.714  [2024-12-09 23:53:12.983770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:57.714  [2024-12-09 23:53:12.984284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:57.714  [2024-12-09 23:53:13.288235] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:11:57.714  ************************************
00:11:57.714  START TEST lvs_grow_clean
00:11:57.714  ************************************
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:57.714   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:57.714    23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:11:57.972   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:11:57.972    23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:11:57.972   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:11:57.972    23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:11:57.972    23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:11:58.230   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:11:58.230   23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:11:58.230    23:53:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1 lvol 150
00:11:58.489   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=be02b138-a75a-4119-b3c1-dd79308cbf54
00:11:58.489   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:11:58.489   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:11:58.489  [2024-12-09 23:53:14.337678] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:11:58.489  [2024-12-09 23:53:14.337727] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:11:58.489  true
00:11:58.749    23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:11:58.749    23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:11:58.749   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:11:58.749   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:11:59.009   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 be02b138-a75a-4119-b3c1-dd79308cbf54
00:11:59.268   23:53:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:11:59.268  [2024-12-09 23:53:15.075896] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:59.268   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2944718
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2944718 /var/tmp/bdevperf.sock
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2944718 ']'
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:11:59.526  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:59.526   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:11:59.526  [2024-12-09 23:53:15.318591] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:11:59.526  [2024-12-09 23:53:15.318636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2944718 ]
00:11:59.785  [2024-12-09 23:53:15.392200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:59.785  [2024-12-09 23:53:15.432917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:59.785   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:59.785   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:11:59.785   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:12:00.058  Nvme0n1
00:12:00.058   23:53:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:12:00.317  [
00:12:00.317    {
00:12:00.317      "name": "Nvme0n1",
00:12:00.317      "aliases": [
00:12:00.318        "be02b138-a75a-4119-b3c1-dd79308cbf54"
00:12:00.318      ],
00:12:00.318      "product_name": "NVMe disk",
00:12:00.318      "block_size": 4096,
00:12:00.318      "num_blocks": 38912,
00:12:00.318      "uuid": "be02b138-a75a-4119-b3c1-dd79308cbf54",
00:12:00.318      "numa_id": 1,
00:12:00.318      "assigned_rate_limits": {
00:12:00.318        "rw_ios_per_sec": 0,
00:12:00.318        "rw_mbytes_per_sec": 0,
00:12:00.318        "r_mbytes_per_sec": 0,
00:12:00.318        "w_mbytes_per_sec": 0
00:12:00.318      },
00:12:00.318      "claimed": false,
00:12:00.318      "zoned": false,
00:12:00.318      "supported_io_types": {
00:12:00.318        "read": true,
00:12:00.318        "write": true,
00:12:00.318        "unmap": true,
00:12:00.318        "flush": true,
00:12:00.318        "reset": true,
00:12:00.318        "nvme_admin": true,
00:12:00.318        "nvme_io": true,
00:12:00.318        "nvme_io_md": false,
00:12:00.318        "write_zeroes": true,
00:12:00.318        "zcopy": false,
00:12:00.318        "get_zone_info": false,
00:12:00.318        "zone_management": false,
00:12:00.318        "zone_append": false,
00:12:00.318        "compare": true,
00:12:00.318        "compare_and_write": true,
00:12:00.318        "abort": true,
00:12:00.318        "seek_hole": false,
00:12:00.318        "seek_data": false,
00:12:00.318        "copy": true,
00:12:00.318        "nvme_iov_md": false
00:12:00.318      },
00:12:00.318      "memory_domains": [
00:12:00.318        {
00:12:00.318          "dma_device_id": "system",
00:12:00.318          "dma_device_type": 1
00:12:00.318        }
00:12:00.318      ],
00:12:00.318      "driver_specific": {
00:12:00.318        "nvme": [
00:12:00.318          {
00:12:00.318            "trid": {
00:12:00.318              "trtype": "TCP",
00:12:00.318              "adrfam": "IPv4",
00:12:00.318              "traddr": "10.0.0.2",
00:12:00.318              "trsvcid": "4420",
00:12:00.318              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:12:00.318            },
00:12:00.318            "ctrlr_data": {
00:12:00.318              "cntlid": 1,
00:12:00.318              "vendor_id": "0x8086",
00:12:00.318              "model_number": "SPDK bdev Controller",
00:12:00.318              "serial_number": "SPDK0",
00:12:00.318              "firmware_revision": "25.01",
00:12:00.318              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:00.318              "oacs": {
00:12:00.318                "security": 0,
00:12:00.318                "format": 0,
00:12:00.318                "firmware": 0,
00:12:00.318                "ns_manage": 0
00:12:00.318              },
00:12:00.318              "multi_ctrlr": true,
00:12:00.318              "ana_reporting": false
00:12:00.318            },
00:12:00.318            "vs": {
00:12:00.318              "nvme_version": "1.3"
00:12:00.318            },
00:12:00.318            "ns_data": {
00:12:00.318              "id": 1,
00:12:00.318              "can_share": true
00:12:00.318            }
00:12:00.318          }
00:12:00.318        ],
00:12:00.318        "mp_policy": "active_passive"
00:12:00.318      }
00:12:00.318    }
00:12:00.318  ]
00:12:00.318   23:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2944888
00:12:00.318   23:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:12:00.318   23:53:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:00.318  Running I/O for 10 seconds...
00:12:01.722                                                                                                  Latency(us)
00:12:01.722  
[2024-12-09T22:53:17.579Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:01.722  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:01.722  	 Nvme0n1             :       1.00   23465.00      91.66       0.00     0.00       0.00       0.00       0.00
00:12:01.722  
[2024-12-09T22:53:17.579Z]  ===================================================================================================================
00:12:01.722  
[2024-12-09T22:53:17.579Z]  Total                       :              23465.00      91.66       0.00     0.00       0.00       0.00       0.00
00:12:01.722  
00:12:02.348   23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:02.348  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:02.348  	 Nvme0n1             :       2.00   23587.50      92.14       0.00     0.00       0.00       0.00       0.00
00:12:02.348  
[2024-12-09T22:53:18.205Z]  ===================================================================================================================
00:12:02.348  
[2024-12-09T22:53:18.205Z]  Total                       :              23587.50      92.14       0.00     0.00       0.00       0.00       0.00
00:12:02.348  
00:12:02.607  true
00:12:02.607    23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:02.607    23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:12:02.867   23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:12:02.867   23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:12:02.867   23:53:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2944888
00:12:03.435  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:03.435  	 Nvme0n1             :       3.00   23670.00      92.46       0.00     0.00       0.00       0.00       0.00
00:12:03.435  
[2024-12-09T22:53:19.292Z]  ===================================================================================================================
00:12:03.435  
[2024-12-09T22:53:19.292Z]  Total                       :              23670.00      92.46       0.00     0.00       0.00       0.00       0.00
00:12:03.435  
00:12:04.373  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:04.373  	 Nvme0n1             :       4.00   23669.50      92.46       0.00     0.00       0.00       0.00       0.00
00:12:04.373  
[2024-12-09T22:53:20.230Z]  ===================================================================================================================
00:12:04.373  
[2024-12-09T22:53:20.230Z]  Total                       :              23669.50      92.46       0.00     0.00       0.00       0.00       0.00
00:12:04.373  
00:12:05.308  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:05.308  	 Nvme0n1             :       5.00   23688.60      92.53       0.00     0.00       0.00       0.00       0.00
00:12:05.308  
[2024-12-09T22:53:21.165Z]  ===================================================================================================================
00:12:05.308  
[2024-12-09T22:53:21.165Z]  Total                       :              23688.60      92.53       0.00     0.00       0.00       0.00       0.00
00:12:05.308  
00:12:06.702  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:06.702  	 Nvme0n1             :       6.00   23734.67      92.71       0.00     0.00       0.00       0.00       0.00
00:12:06.702  
[2024-12-09T22:53:22.559Z]  ===================================================================================================================
00:12:06.702  
[2024-12-09T22:53:22.559Z]  Total                       :              23734.67      92.71       0.00     0.00       0.00       0.00       0.00
00:12:06.702  
00:12:07.639  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:07.639  	 Nvme0n1             :       7.00   23775.00      92.87       0.00     0.00       0.00       0.00       0.00
00:12:07.639  
[2024-12-09T22:53:23.496Z]  ===================================================================================================================
00:12:07.639  
[2024-12-09T22:53:23.496Z]  Total                       :              23775.00      92.87       0.00     0.00       0.00       0.00       0.00
00:12:07.639  
00:12:08.575  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:08.575  	 Nvme0n1             :       8.00   23814.62      93.03       0.00     0.00       0.00       0.00       0.00
00:12:08.575  
[2024-12-09T22:53:24.432Z]  ===================================================================================================================
00:12:08.575  
[2024-12-09T22:53:24.432Z]  Total                       :              23814.62      93.03       0.00     0.00       0.00       0.00       0.00
00:12:08.575  
00:12:09.510  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:09.510  	 Nvme0n1             :       9.00   23849.11      93.16       0.00     0.00       0.00       0.00       0.00
00:12:09.510  
[2024-12-09T22:53:25.367Z]  ===================================================================================================================
00:12:09.510  
[2024-12-09T22:53:25.367Z]  Total                       :              23849.11      93.16       0.00     0.00       0.00       0.00       0.00
00:12:09.510  
00:12:10.447  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:10.447  	 Nvme0n1             :      10.00   23878.00      93.27       0.00     0.00       0.00       0.00       0.00
00:12:10.447  
[2024-12-09T22:53:26.304Z]  ===================================================================================================================
00:12:10.447  
[2024-12-09T22:53:26.304Z]  Total                       :              23878.00      93.27       0.00     0.00       0.00       0.00       0.00
00:12:10.447  
00:12:10.447  
00:12:10.447                                                                                                  Latency(us)
00:12:10.447  
[2024-12-09T22:53:26.304Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:10.447  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:10.447  	 Nvme0n1             :      10.00   23870.59      93.24       0.00     0.00    5358.69    2371.78   12795.12
00:12:10.447  
[2024-12-09T22:53:26.304Z]  ===================================================================================================================
00:12:10.447  
[2024-12-09T22:53:26.304Z]  Total                       :              23870.59      93.24       0.00     0.00    5358.69    2371.78   12795.12
00:12:10.447  {
00:12:10.447    "results": [
00:12:10.447      {
00:12:10.447        "job": "Nvme0n1",
00:12:10.447        "core_mask": "0x2",
00:12:10.447        "workload": "randwrite",
00:12:10.447        "status": "finished",
00:12:10.447        "queue_depth": 128,
00:12:10.447        "io_size": 4096,
00:12:10.447        "runtime": 10.003148,
00:12:10.447        "iops": 23870.585539672113,
00:12:10.447        "mibps": 93.24447476434419,
00:12:10.447        "io_failed": 0,
00:12:10.447        "io_timeout": 0,
00:12:10.447        "avg_latency_us": 5358.692645187331,
00:12:10.447        "min_latency_us": 2371.7790476190476,
00:12:10.447        "max_latency_us": 12795.12380952381
00:12:10.447      }
00:12:10.447    ],
00:12:10.447    "core_count": 1
00:12:10.447  }
00:12:10.447   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2944718
00:12:10.447   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2944718 ']'
00:12:10.447   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2944718
00:12:10.447    23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:12:10.447   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:10.447    23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2944718
00:12:10.447   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:10.448   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:10.448   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2944718'
00:12:10.448  killing process with pid 2944718
00:12:10.448   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2944718
00:12:10.448  Received shutdown signal, test time was about 10.000000 seconds
00:12:10.448  
00:12:10.448                                                                                                  Latency(us)
00:12:10.448  
[2024-12-09T22:53:26.305Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:10.448  
[2024-12-09T22:53:26.305Z]  ===================================================================================================================
00:12:10.448  
[2024-12-09T22:53:26.305Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:10.448   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2944718
00:12:10.705   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:10.963   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:10.963    23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:10.963    23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:12:11.221   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:12:11.221   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:12:11.221   23:53:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:11.480  [2024-12-09 23:53:27.154838] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:11.480   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:11.480   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:12:11.480   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:11.480   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:11.480   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.481    23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:11.481   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.481    23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:11.481   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:11.481   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:11.481   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:11.481   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:11.740  request:
00:12:11.740  {
00:12:11.740    "uuid": "27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1",
00:12:11.740    "method": "bdev_lvol_get_lvstores",
00:12:11.740    "req_id": 1
00:12:11.740  }
00:12:11.740  Got JSON-RPC error response
00:12:11.740  response:
00:12:11.740  {
00:12:11.740    "code": -19,
00:12:11.740    "message": "No such device"
00:12:11.740  }
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:11.740  aio_bdev
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev be02b138-a75a-4119-b3c1-dd79308cbf54
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=be02b138-a75a-4119-b3c1-dd79308cbf54
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:11.740   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:11.998   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b be02b138-a75a-4119-b3c1-dd79308cbf54 -t 2000
00:12:12.257  [
00:12:12.257    {
00:12:12.257      "name": "be02b138-a75a-4119-b3c1-dd79308cbf54",
00:12:12.257      "aliases": [
00:12:12.257        "lvs/lvol"
00:12:12.257      ],
00:12:12.257      "product_name": "Logical Volume",
00:12:12.257      "block_size": 4096,
00:12:12.257      "num_blocks": 38912,
00:12:12.257      "uuid": "be02b138-a75a-4119-b3c1-dd79308cbf54",
00:12:12.257      "assigned_rate_limits": {
00:12:12.257        "rw_ios_per_sec": 0,
00:12:12.257        "rw_mbytes_per_sec": 0,
00:12:12.257        "r_mbytes_per_sec": 0,
00:12:12.257        "w_mbytes_per_sec": 0
00:12:12.257      },
00:12:12.257      "claimed": false,
00:12:12.257      "zoned": false,
00:12:12.257      "supported_io_types": {
00:12:12.257        "read": true,
00:12:12.257        "write": true,
00:12:12.257        "unmap": true,
00:12:12.257        "flush": false,
00:12:12.257        "reset": true,
00:12:12.257        "nvme_admin": false,
00:12:12.257        "nvme_io": false,
00:12:12.257        "nvme_io_md": false,
00:12:12.257        "write_zeroes": true,
00:12:12.257        "zcopy": false,
00:12:12.257        "get_zone_info": false,
00:12:12.257        "zone_management": false,
00:12:12.257        "zone_append": false,
00:12:12.257        "compare": false,
00:12:12.257        "compare_and_write": false,
00:12:12.257        "abort": false,
00:12:12.257        "seek_hole": true,
00:12:12.257        "seek_data": true,
00:12:12.257        "copy": false,
00:12:12.257        "nvme_iov_md": false
00:12:12.257      },
00:12:12.257      "driver_specific": {
00:12:12.257        "lvol": {
00:12:12.257          "lvol_store_uuid": "27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1",
00:12:12.257          "base_bdev": "aio_bdev",
00:12:12.257          "thin_provision": false,
00:12:12.257          "num_allocated_clusters": 38,
00:12:12.257          "snapshot": false,
00:12:12.257          "clone": false,
00:12:12.257          "esnap_clone": false
00:12:12.257        }
00:12:12.257      }
00:12:12.257    }
00:12:12.257  ]
00:12:12.257   23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:12:12.257    23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:12.257    23:53:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:12:12.515   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:12:12.515    23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:12.515    23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:12:12.515   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:12:12.515   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete be02b138-a75a-4119-b3c1-dd79308cbf54
00:12:12.774   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27789687-d5b8-49cc-b4c8-9e9f7c4dc6e1
00:12:13.033   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:13.292  
00:12:13.292  real	0m15.577s
00:12:13.292  user	0m15.101s
00:12:13.292  sys	0m1.509s
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:12:13.292  ************************************
00:12:13.292  END TEST lvs_grow_clean
00:12:13.292  ************************************
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:12:13.292  ************************************
00:12:13.292  START TEST lvs_grow_dirty
00:12:13.292  ************************************
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:12:13.292   23:53:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:12:13.292   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:12:13.292   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:12:13.292   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:12:13.292   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:13.292   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:13.292    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:13.550   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:12:13.550    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:12:13.809   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=291439a2-784b-4561-92f6-349593284d1e
00:12:13.809    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:13.809    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:12:13.809   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:12:13.809   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:12:13.809    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 291439a2-784b-4561-92f6-349593284d1e lvol 150
00:12:14.068   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:14.068   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:14.068   23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:12:14.326  [2024-12-09 23:53:29.959102] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:12:14.326  [2024-12-09 23:53:29.959152] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:12:14.326  true
00:12:14.327    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:14.327    23:53:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:12:14.327   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:12:14.327   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:12:14.585   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:14.844   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:12:14.844  [2024-12-09 23:53:30.701318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2947419
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2947419 /var/tmp/bdevperf.sock
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2947419 ']'
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:12:15.103  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:15.103   23:53:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:15.103  [2024-12-09 23:53:30.921105] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:15.103  [2024-12-09 23:53:30.921150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2947419 ]
00:12:15.361  [2024-12-09 23:53:30.994989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:15.361  [2024-12-09 23:53:31.033742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:15.361   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:15.361   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:12:15.362   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:12:15.620  Nvme0n1
00:12:15.620   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:12:15.879  [
00:12:15.879    {
00:12:15.879      "name": "Nvme0n1",
00:12:15.879      "aliases": [
00:12:15.879        "a3f5fc05-e6d7-4efe-8468-5723216a6ea9"
00:12:15.879      ],
00:12:15.879      "product_name": "NVMe disk",
00:12:15.879      "block_size": 4096,
00:12:15.879      "num_blocks": 38912,
00:12:15.879      "uuid": "a3f5fc05-e6d7-4efe-8468-5723216a6ea9",
00:12:15.879      "numa_id": 1,
00:12:15.879      "assigned_rate_limits": {
00:12:15.879        "rw_ios_per_sec": 0,
00:12:15.879        "rw_mbytes_per_sec": 0,
00:12:15.879        "r_mbytes_per_sec": 0,
00:12:15.879        "w_mbytes_per_sec": 0
00:12:15.879      },
00:12:15.879      "claimed": false,
00:12:15.879      "zoned": false,
00:12:15.879      "supported_io_types": {
00:12:15.879        "read": true,
00:12:15.879        "write": true,
00:12:15.879        "unmap": true,
00:12:15.879        "flush": true,
00:12:15.879        "reset": true,
00:12:15.879        "nvme_admin": true,
00:12:15.879        "nvme_io": true,
00:12:15.879        "nvme_io_md": false,
00:12:15.879        "write_zeroes": true,
00:12:15.879        "zcopy": false,
00:12:15.879        "get_zone_info": false,
00:12:15.879        "zone_management": false,
00:12:15.879        "zone_append": false,
00:12:15.879        "compare": true,
00:12:15.879        "compare_and_write": true,
00:12:15.879        "abort": true,
00:12:15.879        "seek_hole": false,
00:12:15.879        "seek_data": false,
00:12:15.879        "copy": true,
00:12:15.879        "nvme_iov_md": false
00:12:15.879      },
00:12:15.879      "memory_domains": [
00:12:15.879        {
00:12:15.879          "dma_device_id": "system",
00:12:15.879          "dma_device_type": 1
00:12:15.879        }
00:12:15.879      ],
00:12:15.879      "driver_specific": {
00:12:15.879        "nvme": [
00:12:15.879          {
00:12:15.879            "trid": {
00:12:15.879              "trtype": "TCP",
00:12:15.879              "adrfam": "IPv4",
00:12:15.879              "traddr": "10.0.0.2",
00:12:15.879              "trsvcid": "4420",
00:12:15.879              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:12:15.879            },
00:12:15.879            "ctrlr_data": {
00:12:15.879              "cntlid": 1,
00:12:15.879              "vendor_id": "0x8086",
00:12:15.879              "model_number": "SPDK bdev Controller",
00:12:15.879              "serial_number": "SPDK0",
00:12:15.879              "firmware_revision": "25.01",
00:12:15.879              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:12:15.879              "oacs": {
00:12:15.879                "security": 0,
00:12:15.879                "format": 0,
00:12:15.879                "firmware": 0,
00:12:15.879                "ns_manage": 0
00:12:15.879              },
00:12:15.879              "multi_ctrlr": true,
00:12:15.879              "ana_reporting": false
00:12:15.879            },
00:12:15.879            "vs": {
00:12:15.879              "nvme_version": "1.3"
00:12:15.879            },
00:12:15.879            "ns_data": {
00:12:15.879              "id": 1,
00:12:15.879              "can_share": true
00:12:15.879            }
00:12:15.879          }
00:12:15.879        ],
00:12:15.879        "mp_policy": "active_passive"
00:12:15.879      }
00:12:15.879    }
00:12:15.879  ]
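As a sanity check on the bdev description above, its capacity follows directly from `block_size * num_blocks`. A minimal sketch, assuming only the two size fields trimmed out of the `bdev_get_bdevs` output printed above (the excerpt is a hand-copied fragment, not a live query):

```python
import json

# Trimmed, hand-copied excerpt of the bdev_get_bdevs output above;
# only the fields needed for the size calculation are kept.
bdev_json = '''
[
  {
    "name": "Nvme0n1",
    "block_size": 4096,
    "num_blocks": 38912
  }
]
'''

def bdev_size_mib(text):
    """Capacity in MiB computed as block_size * num_blocks / 2**20."""
    bdev = json.loads(text)[0]
    return bdev["block_size"] * bdev["num_blocks"] / (1024 * 1024)

print(bdev_size_mib(bdev_json))  # 38912 blocks * 4096 B = 152.0 MiB
```

This matches the 38-cluster allocation reported later for the backing lvol (38 clusters of 4 MiB = 152 MiB).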
00:12:15.879   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2947520
00:12:15.879   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:12:15.879   23:53:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:15.879  Running I/O for 10 seconds...
00:12:17.255                                                                                                  Latency(us)
00:12:17.255  
[2024-12-09T22:53:33.112Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:17.256  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:17.256  	 Nvme0n1             :       1.00   23654.00      92.40       0.00     0.00       0.00       0.00       0.00
00:12:17.256  
[2024-12-09T22:53:33.113Z]  ===================================================================================================================
00:12:17.256  
[2024-12-09T22:53:33.113Z]  Total                       :              23654.00      92.40       0.00     0.00       0.00       0.00       0.00
00:12:17.256  
00:12:17.822   23:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 291439a2-784b-4561-92f6-349593284d1e
00:12:18.081  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:18.081  	 Nvme0n1             :       2.00   23741.00      92.74       0.00     0.00       0.00       0.00       0.00
00:12:18.081  
[2024-12-09T22:53:33.938Z]  ===================================================================================================================
00:12:18.081  
[2024-12-09T22:53:33.938Z]  Total                       :              23741.00      92.74       0.00     0.00       0.00       0.00       0.00
00:12:18.081  
00:12:18.081  true
00:12:18.081    23:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:18.081    23:53:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:12:18.340   23:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:12:18.340   23:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:12:18.340   23:53:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2947520
00:12:18.907  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:18.907  	 Nvme0n1             :       3.00   23787.00      92.92       0.00     0.00       0.00       0.00       0.00
00:12:18.907  
[2024-12-09T22:53:34.764Z]  ===================================================================================================================
00:12:18.907  
[2024-12-09T22:53:34.764Z]  Total                       :              23787.00      92.92       0.00     0.00       0.00       0.00       0.00
00:12:18.907  
00:12:20.283  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:20.283  	 Nvme0n1             :       4.00   23843.25      93.14       0.00     0.00       0.00       0.00       0.00
00:12:20.283  
[2024-12-09T22:53:36.140Z]  ===================================================================================================================
00:12:20.283  
[2024-12-09T22:53:36.140Z]  Total                       :              23843.25      93.14       0.00     0.00       0.00       0.00       0.00
00:12:20.283  
00:12:20.850  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:20.850  	 Nvme0n1             :       5.00   23877.20      93.27       0.00     0.00       0.00       0.00       0.00
00:12:20.850  
[2024-12-09T22:53:36.707Z]  ===================================================================================================================
00:12:20.850  
[2024-12-09T22:53:36.707Z]  Total                       :              23877.20      93.27       0.00     0.00       0.00       0.00       0.00
00:12:20.850  
00:12:22.226  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:22.226  	 Nvme0n1             :       6.00   23903.33      93.37       0.00     0.00       0.00       0.00       0.00
00:12:22.226  
[2024-12-09T22:53:38.083Z]  ===================================================================================================================
00:12:22.226  
[2024-12-09T22:53:38.083Z]  Total                       :              23903.33      93.37       0.00     0.00       0.00       0.00       0.00
00:12:22.226  
00:12:23.163  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:23.163  	 Nvme0n1             :       7.00   23912.14      93.41       0.00     0.00       0.00       0.00       0.00
00:12:23.163  
[2024-12-09T22:53:39.020Z]  ===================================================================================================================
00:12:23.163  
[2024-12-09T22:53:39.020Z]  Total                       :              23912.14      93.41       0.00     0.00       0.00       0.00       0.00
00:12:23.163  
00:12:24.098  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:24.098  	 Nvme0n1             :       8.00   23939.88      93.52       0.00     0.00       0.00       0.00       0.00
00:12:24.098  
[2024-12-09T22:53:39.955Z]  ===================================================================================================================
00:12:24.098  
[2024-12-09T22:53:39.955Z]  Total                       :              23939.88      93.52       0.00     0.00       0.00       0.00       0.00
00:12:24.098  
00:12:25.034  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:25.034  	 Nvme0n1             :       9.00   23894.44      93.34       0.00     0.00       0.00       0.00       0.00
00:12:25.034  
[2024-12-09T22:53:40.891Z]  ===================================================================================================================
00:12:25.034  
[2024-12-09T22:53:40.891Z]  Total                       :              23894.44      93.34       0.00     0.00       0.00       0.00       0.00
00:12:25.034  
00:12:25.971  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:25.971  	 Nvme0n1             :      10.00   23909.70      93.40       0.00     0.00       0.00       0.00       0.00
00:12:25.971  
[2024-12-09T22:53:41.828Z]  ===================================================================================================================
00:12:25.971  
[2024-12-09T22:53:41.828Z]  Total                       :              23909.70      93.40       0.00     0.00       0.00       0.00       0.00
00:12:25.971  
00:12:25.971  
00:12:25.971                                                                                                  Latency(us)
00:12:25.971  
[2024-12-09T22:53:41.828Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:25.971  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:12:25.971  	 Nvme0n1             :      10.00   23912.10      93.41       0.00     0.00    5350.09    1466.76   11234.74
00:12:25.971  
[2024-12-09T22:53:41.828Z]  ===================================================================================================================
00:12:25.971  
[2024-12-09T22:53:41.828Z]  Total                       :              23912.10      93.41       0.00     0.00    5350.09    1466.76   11234.74
00:12:25.971  {
00:12:25.971    "results": [
00:12:25.971      {
00:12:25.971        "job": "Nvme0n1",
00:12:25.971        "core_mask": "0x2",
00:12:25.971        "workload": "randwrite",
00:12:25.971        "status": "finished",
00:12:25.971        "queue_depth": 128,
00:12:25.971        "io_size": 4096,
00:12:25.971        "runtime": 10.004348,
00:12:25.971        "iops": 23912.103017607944,
00:12:25.971        "mibps": 93.40665241253103,
00:12:25.971        "io_failed": 0,
00:12:25.971        "io_timeout": 0,
00:12:25.971        "avg_latency_us": 5350.090026344994,
00:12:25.971        "min_latency_us": 1466.7580952380952,
00:12:25.971        "max_latency_us": 11234.742857142857
00:12:25.971      }
00:12:25.971    ],
00:12:25.971    "core_count": 1
00:12:25.971  }
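The `perform_tests` result object above ties `iops` and `mibps` together through the 4096-byte I/O size: MiB/s = IOPS * io_size / 2^20. A minimal sketch validating that relation against the figures copied from this log (the JSON literal is a trimmed copy, not regenerated output):

```python
import json

# Trimmed copy of the perform_tests result printed above.
results_json = '''
{
  "results": [
    {
      "job": "Nvme0n1",
      "io_size": 4096,
      "runtime": 10.004348,
      "iops": 23912.103017607944,
      "mibps": 93.40665241253103
    }
  ],
  "core_count": 1
}
'''

def throughput_mibps(result):
    """Recompute MiB/s from IOPS and the per-I/O size in bytes."""
    return result["iops"] * result["io_size"] / (1024 * 1024)

result = json.loads(results_json)["results"][0]
recomputed = throughput_mibps(result)
# Should agree with the reported "mibps" field to floating-point precision.
assert abs(recomputed - result["mibps"]) < 1e-6
print(round(recomputed, 2))  # 93.41, as in the summary table above
```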
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2947419
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2947419 ']'
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2947419
00:12:25.971    23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:25.971    23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2947419
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2947419'
00:12:25.971  killing process with pid 2947419
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2947419
00:12:25.971  Received shutdown signal, test time was about 10.000000 seconds
00:12:25.971  
00:12:25.971                                                                                                  Latency(us)
00:12:25.971  
[2024-12-09T22:53:41.828Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:25.971  
[2024-12-09T22:53:41.828Z]  ===================================================================================================================
00:12:25.971  
[2024-12-09T22:53:41.828Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:25.971   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2947419
00:12:26.229   23:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:12:26.488   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:12:26.747    23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:26.747    23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2944385
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2944385
00:12:26.747  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2944385 Killed                  "${NVMF_APP[@]}" "$@"
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2949400
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2949400
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2949400 ']'
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:26.747  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:26.747   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:27.004  [2024-12-09 23:53:42.621301] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:27.004  [2024-12-09 23:53:42.621346] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:27.004  [2024-12-09 23:53:42.698318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.004  [2024-12-09 23:53:42.737209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:27.004  [2024-12-09 23:53:42.737245] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:27.004  [2024-12-09 23:53:42.737252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:27.004  [2024-12-09 23:53:42.737258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:27.004  [2024-12-09 23:53:42.737263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:27.004  [2024-12-09 23:53:42.737791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.004   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:27.004   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:12:27.004   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:27.004   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:27.004   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:27.263   23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:27.263    23:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:27.263  [2024-12-09 23:53:43.043790] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:12:27.263  [2024-12-09 23:53:43.043882] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:12:27.263  [2024-12-09 23:53:43.043908] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:27.263   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:27.521   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3f5fc05-e6d7-4efe-8468-5723216a6ea9 -t 2000
00:12:27.779  [
00:12:27.779    {
00:12:27.779      "name": "a3f5fc05-e6d7-4efe-8468-5723216a6ea9",
00:12:27.779      "aliases": [
00:12:27.779        "lvs/lvol"
00:12:27.779      ],
00:12:27.779      "product_name": "Logical Volume",
00:12:27.779      "block_size": 4096,
00:12:27.779      "num_blocks": 38912,
00:12:27.779      "uuid": "a3f5fc05-e6d7-4efe-8468-5723216a6ea9",
00:12:27.779      "assigned_rate_limits": {
00:12:27.779        "rw_ios_per_sec": 0,
00:12:27.779        "rw_mbytes_per_sec": 0,
00:12:27.779        "r_mbytes_per_sec": 0,
00:12:27.779        "w_mbytes_per_sec": 0
00:12:27.779      },
00:12:27.779      "claimed": false,
00:12:27.779      "zoned": false,
00:12:27.779      "supported_io_types": {
00:12:27.779        "read": true,
00:12:27.779        "write": true,
00:12:27.779        "unmap": true,
00:12:27.779        "flush": false,
00:12:27.779        "reset": true,
00:12:27.779        "nvme_admin": false,
00:12:27.779        "nvme_io": false,
00:12:27.779        "nvme_io_md": false,
00:12:27.779        "write_zeroes": true,
00:12:27.779        "zcopy": false,
00:12:27.779        "get_zone_info": false,
00:12:27.779        "zone_management": false,
00:12:27.779        "zone_append": false,
00:12:27.779        "compare": false,
00:12:27.779        "compare_and_write": false,
00:12:27.779        "abort": false,
00:12:27.779        "seek_hole": true,
00:12:27.779        "seek_data": true,
00:12:27.779        "copy": false,
00:12:27.779        "nvme_iov_md": false
00:12:27.779      },
00:12:27.779      "driver_specific": {
00:12:27.779        "lvol": {
00:12:27.779          "lvol_store_uuid": "291439a2-784b-4561-92f6-349593284d1e",
00:12:27.779          "base_bdev": "aio_bdev",
00:12:27.779          "thin_provision": false,
00:12:27.779          "num_allocated_clusters": 38,
00:12:27.779          "snapshot": false,
00:12:27.779          "clone": false,
00:12:27.779          "esnap_clone": false
00:12:27.779        }
00:12:27.779      }
00:12:27.779    }
00:12:27.779  ]
00:12:27.779   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:12:27.779    23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:27.779    23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:12:27.779   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:12:27.780    23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:27.780    23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:12:28.038   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:12:28.038   23:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:28.297  [2024-12-09 23:53:44.020876] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:28.297    23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:28.297    23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:12:28.297   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:28.556  request:
00:12:28.556  {
00:12:28.556    "uuid": "291439a2-784b-4561-92f6-349593284d1e",
00:12:28.556    "method": "bdev_lvol_get_lvstores",
00:12:28.556    "req_id": 1
00:12:28.556  }
00:12:28.556  Got JSON-RPC error response
00:12:28.556  response:
00:12:28.556  {
00:12:28.556    "code": -19,
00:12:28.556    "message": "No such device"
00:12:28.556  }
00:12:28.556   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:12:28.556   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:28.556   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:28.556   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:28.556   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:12:28.879  aio_bdev
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:28.879   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a3f5fc05-e6d7-4efe-8468-5723216a6ea9 -t 2000
00:12:29.141  [
00:12:29.141    {
00:12:29.141      "name": "a3f5fc05-e6d7-4efe-8468-5723216a6ea9",
00:12:29.141      "aliases": [
00:12:29.141        "lvs/lvol"
00:12:29.141      ],
00:12:29.141      "product_name": "Logical Volume",
00:12:29.141      "block_size": 4096,
00:12:29.141      "num_blocks": 38912,
00:12:29.141      "uuid": "a3f5fc05-e6d7-4efe-8468-5723216a6ea9",
00:12:29.141      "assigned_rate_limits": {
00:12:29.141        "rw_ios_per_sec": 0,
00:12:29.141        "rw_mbytes_per_sec": 0,
00:12:29.141        "r_mbytes_per_sec": 0,
00:12:29.141        "w_mbytes_per_sec": 0
00:12:29.141      },
00:12:29.141      "claimed": false,
00:12:29.141      "zoned": false,
00:12:29.141      "supported_io_types": {
00:12:29.141        "read": true,
00:12:29.141        "write": true,
00:12:29.141        "unmap": true,
00:12:29.141        "flush": false,
00:12:29.141        "reset": true,
00:12:29.141        "nvme_admin": false,
00:12:29.141        "nvme_io": false,
00:12:29.141        "nvme_io_md": false,
00:12:29.141        "write_zeroes": true,
00:12:29.141        "zcopy": false,
00:12:29.141        "get_zone_info": false,
00:12:29.141        "zone_management": false,
00:12:29.141        "zone_append": false,
00:12:29.141        "compare": false,
00:12:29.141        "compare_and_write": false,
00:12:29.141        "abort": false,
00:12:29.141        "seek_hole": true,
00:12:29.141        "seek_data": true,
00:12:29.141        "copy": false,
00:12:29.141        "nvme_iov_md": false
00:12:29.141      },
00:12:29.141      "driver_specific": {
00:12:29.141        "lvol": {
00:12:29.141          "lvol_store_uuid": "291439a2-784b-4561-92f6-349593284d1e",
00:12:29.141          "base_bdev": "aio_bdev",
00:12:29.141          "thin_provision": false,
00:12:29.141          "num_allocated_clusters": 38,
00:12:29.141          "snapshot": false,
00:12:29.141          "clone": false,
00:12:29.141          "esnap_clone": false
00:12:29.141        }
00:12:29.141      }
00:12:29.141    }
00:12:29.141  ]
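The cluster accounting across this dirty-grow recovery is internally consistent: the lvstore checks assert 99 total data clusters (`lvs_grow.sh@89`) and 61 free clusters (`lvs_grow.sh@88`), while the lvol above reports `num_allocated_clusters: 38`. A minimal sketch of that bookkeeping, with the three values hand-copied from this log:

```python
# Values as reported in this run by bdev_lvol_get_lvstores and
# bdev_get_bdevs (hand-copied from the log above).
total_data_clusters = 99      # asserted by lvs_grow.sh@89
free_clusters = 61            # asserted by lvs_grow.sh@88
num_allocated_clusters = 38   # from the lvol's driver_specific output

# Used clusters in the grown lvstore should equal the lvol's allocation.
used = total_data_clusters - free_clusters
assert used == num_allocated_clusters
print("cluster accounting consistent:", used)  # 38
```

The same 61/99 split survives the `kill -9` of the original target and the blobstore recovery (`bs_recover` notices above), which is the point of the `lvs_grow_dirty` case.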
00:12:29.141   23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:12:29.141    23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:29.141    23:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:12:29.400   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:12:29.400    23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 291439a2-784b-4561-92f6-349593284d1e
00:12:29.400    23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:12:29.400   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:12:29.400   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a3f5fc05-e6d7-4efe-8468-5723216a6ea9
00:12:29.658   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 291439a2-784b-4561-92f6-349593284d1e
00:12:29.917   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:12:30.176  
00:12:30.176  real	0m16.811s
00:12:30.176  user	0m43.571s
00:12:30.176  sys	0m3.687s
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:12:30.176  ************************************
00:12:30.176  END TEST lvs_grow_dirty
00:12:30.176  ************************************
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:12:30.176    23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:12:30.176  nvmf_trace.0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:30.176  rmmod nvme_tcp
00:12:30.176  rmmod nvme_fabrics
00:12:30.176  rmmod nvme_keyring
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2949400 ']'
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2949400
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2949400 ']'
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2949400
00:12:30.176    23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:12:30.176   23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:30.176    23:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2949400
00:12:30.176   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:30.176   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:30.176   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2949400'
00:12:30.177  killing process with pid 2949400
00:12:30.177   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2949400
00:12:30.177   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2949400
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:30.436   23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:30.436    23:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:32.988   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:32.988  
00:12:32.988  real	0m41.728s
00:12:32.988  user	1m4.342s
00:12:32.988  sys	0m10.106s
00:12:32.988   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:32.988   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:12:32.988  ************************************
00:12:32.988  END TEST nvmf_lvs_grow
00:12:32.988  ************************************
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:32.989  ************************************
00:12:32.989  START TEST nvmf_bdev_io_wait
00:12:32.989  ************************************
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp
00:12:32.989  * Looking for test storage...
00:12:32.989  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:32.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.989  		--rc genhtml_branch_coverage=1
00:12:32.989  		--rc genhtml_function_coverage=1
00:12:32.989  		--rc genhtml_legend=1
00:12:32.989  		--rc geninfo_all_blocks=1
00:12:32.989  		--rc geninfo_unexecuted_blocks=1
00:12:32.989  		
00:12:32.989  		'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:32.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.989  		--rc genhtml_branch_coverage=1
00:12:32.989  		--rc genhtml_function_coverage=1
00:12:32.989  		--rc genhtml_legend=1
00:12:32.989  		--rc geninfo_all_blocks=1
00:12:32.989  		--rc geninfo_unexecuted_blocks=1
00:12:32.989  		
00:12:32.989  		'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:32.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.989  		--rc genhtml_branch_coverage=1
00:12:32.989  		--rc genhtml_function_coverage=1
00:12:32.989  		--rc genhtml_legend=1
00:12:32.989  		--rc geninfo_all_blocks=1
00:12:32.989  		--rc geninfo_unexecuted_blocks=1
00:12:32.989  		
00:12:32.989  		'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:32.989  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.989  		--rc genhtml_branch_coverage=1
00:12:32.989  		--rc genhtml_function_coverage=1
00:12:32.989  		--rc genhtml_legend=1
00:12:32.989  		--rc geninfo_all_blocks=1
00:12:32.989  		--rc geninfo_unexecuted_blocks=1
00:12:32.989  		
00:12:32.989  		'
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:32.989     23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:32.989      23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:32.989      23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:32.989      23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:32.989      23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:12:32.989      23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:32.989  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:32.989    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:32.989   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:32.990    23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:12:32.990   23:53:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.565   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:12:39.566  Found 0000:af:00.0 (0x8086 - 0x159b)
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:12:39.566  Found 0000:af:00.1 (0x8086 - 0x159b)
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:12:39.566  Found net devices under 0000:af:00.0: cvl_0_0
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:12:39.566  Found net devices under 0000:af:00.1: cvl_0_1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:39.566  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:39.566  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms
00:12:39.566  
00:12:39.566  --- 10.0.0.2 ping statistics ---
00:12:39.566  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.566  rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:39.566  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:39.566  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms
00:12:39.566  
00:12:39.566  --- 10.0.0.1 ping statistics ---
00:12:39.566  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:39.566  rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:12:39.566   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2953433
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2953433
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2953433 ']'
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:39.567  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:39.567   23:53:54 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.567  [2024-12-09 23:53:54.526154] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:39.567  [2024-12-09 23:53:54.526216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:39.567  [2024-12-09 23:53:54.603984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:39.567  [2024-12-09 23:53:54.647366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:39.567  [2024-12-09 23:53:54.647402] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:39.567  [2024-12-09 23:53:54.647409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:39.567  [2024-12-09 23:53:54.647417] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:39.567  [2024-12-09 23:53:54.647422] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:39.567  [2024-12-09 23:53:54.648830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:39.567  [2024-12-09 23:53:54.648936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:39.567  [2024-12-09 23:53:54.649019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.567  [2024-12-09 23:53:54.649020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.567   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.827   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.827   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:39.827   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.827   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.827  [2024-12-09 23:53:55.470896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:39.827   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.828  Malloc0
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:39.828  [2024-12-09 23:53:55.526131] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2953677
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2953679
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:12:39.828  {
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme$subsystem",
00:12:39.828      "trtype": "$TEST_TRANSPORT",
00:12:39.828      "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "$NVMF_PORT",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:39.828      "hdgst": ${hdgst:-false},
00:12:39.828      "ddgst": ${ddgst:-false}
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }
00:12:39.828  EOF
00:12:39.828  )")
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2953681
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:12:39.828  {
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme$subsystem",
00:12:39.828      "trtype": "$TEST_TRANSPORT",
00:12:39.828      "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "$NVMF_PORT",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:39.828      "hdgst": ${hdgst:-false},
00:12:39.828      "ddgst": ${ddgst:-false}
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }
00:12:39.828  EOF
00:12:39.828  )")
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2953684
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:12:39.828  {
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme$subsystem",
00:12:39.828      "trtype": "$TEST_TRANSPORT",
00:12:39.828      "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "$NVMF_PORT",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:39.828      "hdgst": ${hdgst:-false},
00:12:39.828      "ddgst": ${ddgst:-false}
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }
00:12:39.828  EOF
00:12:39.828  )")
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:12:39.828  {
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme$subsystem",
00:12:39.828      "trtype": "$TEST_TRANSPORT",
00:12:39.828      "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "$NVMF_PORT",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:39.828      "hdgst": ${hdgst:-false},
00:12:39.828      "ddgst": ${ddgst:-false}
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }
00:12:39.828  EOF
00:12:39.828  )")
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:12:39.828   23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2953677
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme1",
00:12:39.828      "trtype": "tcp",
00:12:39.828      "traddr": "10.0.0.2",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "4420",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:39.828      "hdgst": false,
00:12:39.828      "ddgst": false
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }'
00:12:39.828    23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme1",
00:12:39.828      "trtype": "tcp",
00:12:39.828      "traddr": "10.0.0.2",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "4420",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:39.828      "hdgst": false,
00:12:39.828      "ddgst": false
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }'
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme1",
00:12:39.828      "trtype": "tcp",
00:12:39.828      "traddr": "10.0.0.2",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "4420",
00:12:39.828      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:39.828      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:39.828      "hdgst": false,
00:12:39.828      "ddgst": false
00:12:39.828    },
00:12:39.828    "method": "bdev_nvme_attach_controller"
00:12:39.828  }'
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:12:39.828     23:53:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:12:39.828    "params": {
00:12:39.828      "name": "Nvme1",
00:12:39.828      "trtype": "tcp",
00:12:39.828      "traddr": "10.0.0.2",
00:12:39.828      "adrfam": "ipv4",
00:12:39.828      "trsvcid": "4420",
00:12:39.829      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:39.829      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:39.829      "hdgst": false,
00:12:39.829      "ddgst": false
00:12:39.829    },
00:12:39.829    "method": "bdev_nvme_attach_controller"
00:12:39.829  }'
00:12:39.829  [2024-12-09 23:53:55.575182] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:39.829  [2024-12-09 23:53:55.575228] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:12:39.829  [2024-12-09 23:53:55.578657] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:39.829  [2024-12-09 23:53:55.578698] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:12:39.829  [2024-12-09 23:53:55.579472] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:39.829  [2024-12-09 23:53:55.579512] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:12:39.829  [2024-12-09 23:53:55.581565] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:39.829  [2024-12-09 23:53:55.581604] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:12:40.086  [2024-12-09 23:53:55.752303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.087  [2024-12-09 23:53:55.797376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:12:40.087  [2024-12-09 23:53:55.842729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.087  [2024-12-09 23:53:55.887714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:12:40.087  [2024-12-09 23:53:55.938361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.345  [2024-12-09 23:53:55.995014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:12:40.345  [2024-12-09 23:53:55.995150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.345  [2024-12-09 23:53:56.037517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:12:40.345  Running I/O for 1 seconds...
00:12:40.603  Running I/O for 1 seconds...
00:12:40.603  Running I/O for 1 seconds...
00:12:40.603  Running I/O for 1 seconds...
00:12:41.538       8820.00 IOPS,    34.45 MiB/s
00:12:41.538                                                                                                  Latency(us)
00:12:41.538  
[2024-12-09T22:53:57.395Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:41.538  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:12:41.538  	 Nvme1n1             :       1.01    8817.53      34.44       0.00     0.00   14359.15    6491.18   24716.43
00:12:41.538  
[2024-12-09T22:53:57.395Z]  ===================================================================================================================
00:12:41.538  
[2024-12-09T22:53:57.395Z]  Total                       :               8817.53      34.44       0.00     0.00   14359.15    6491.18   24716.43
00:12:41.538     241872.00 IOPS,   944.81 MiB/s
00:12:41.538                                                                                                  Latency(us)
00:12:41.538  
[2024-12-09T22:53:57.395Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:41.538  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:12:41.538  	 Nvme1n1             :       1.00  241511.94     943.41       0.00     0.00     526.85     221.38    1490.16
00:12:41.538  
[2024-12-09T22:53:57.395Z]  ===================================================================================================================
00:12:41.538  
[2024-12-09T22:53:57.396Z]  Total                       :             241511.94     943.41       0.00     0.00     526.85     221.38    1490.16
00:12:41.539       8258.00 IOPS,    32.26 MiB/s
[2024-12-09T22:53:57.396Z]  23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2953679
00:12:41.539  
00:12:41.539                                                                                                  Latency(us)
00:12:41.539  
[2024-12-09T22:53:57.396Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:41.539  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:12:41.539  	 Nvme1n1             :       1.01    8357.86      32.65       0.00     0.00   15275.02    3620.08   28711.01
00:12:41.539  
[2024-12-09T22:53:57.396Z]  ===================================================================================================================
00:12:41.539  
[2024-12-09T22:53:57.396Z]  Total                       :               8357.86      32.65       0.00     0.00   15275.02    3620.08   28711.01
00:12:41.539      10458.00 IOPS,    40.85 MiB/s
00:12:41.539                                                                                                  Latency(us)
00:12:41.539  
[2024-12-09T22:53:57.396Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:41.539  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:12:41.539  	 Nvme1n1             :       1.01   10522.88      41.10       0.00     0.00   12122.80    5149.26   21970.16
00:12:41.539  
[2024-12-09T22:53:57.396Z]  ===================================================================================================================
00:12:41.539  
[2024-12-09T22:53:57.396Z]  Total                       :              10522.88      41.10       0.00     0.00   12122.80    5149.26   21970.16
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2953681
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2953684
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:41.798  rmmod nvme_tcp
00:12:41.798  rmmod nvme_fabrics
00:12:41.798  rmmod nvme_keyring
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2953433 ']'
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2953433
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2953433 ']'
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2953433
00:12:41.798    23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:41.798    23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2953433
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2953433'
00:12:41.798  killing process with pid 2953433
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2953433
00:12:41.798   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2953433
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:42.080   23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:42.080    23:53:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:43.987  
00:12:43.987  real	0m11.431s
00:12:43.987  user	0m19.191s
00:12:43.987  sys	0m6.168s
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:12:43.987  ************************************
00:12:43.987  END TEST nvmf_bdev_io_wait
00:12:43.987  ************************************
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:12:43.987  ************************************
00:12:43.987  START TEST nvmf_queue_depth
00:12:43.987  ************************************
00:12:43.987   23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:12:44.247  * Looking for test storage...
00:12:44.247  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:12:44.247     23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:12:44.247     23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:12:44.247    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:12:44.248    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:44.248    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:12:44.248    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:12:44.248    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:44.248    23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:44.248     23:53:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:12:44.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:44.248  		--rc genhtml_branch_coverage=1
00:12:44.248  		--rc genhtml_function_coverage=1
00:12:44.248  		--rc genhtml_legend=1
00:12:44.248  		--rc geninfo_all_blocks=1
00:12:44.248  		--rc geninfo_unexecuted_blocks=1
00:12:44.248  		
00:12:44.248  		'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:12:44.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:44.248  		--rc genhtml_branch_coverage=1
00:12:44.248  		--rc genhtml_function_coverage=1
00:12:44.248  		--rc genhtml_legend=1
00:12:44.248  		--rc geninfo_all_blocks=1
00:12:44.248  		--rc geninfo_unexecuted_blocks=1
00:12:44.248  		
00:12:44.248  		'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:44.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:44.248  		--rc genhtml_branch_coverage=1
00:12:44.248  		--rc genhtml_function_coverage=1
00:12:44.248  		--rc genhtml_legend=1
00:12:44.248  		--rc geninfo_all_blocks=1
00:12:44.248  		--rc geninfo_unexecuted_blocks=1
00:12:44.248  		
00:12:44.248  		'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:12:44.248  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:44.248  		--rc genhtml_branch_coverage=1
00:12:44.248  		--rc genhtml_function_coverage=1
00:12:44.248  		--rc genhtml_legend=1
00:12:44.248  		--rc geninfo_all_blocks=1
00:12:44.248  		--rc geninfo_unexecuted_blocks=1
00:12:44.248  		
00:12:44.248  		'
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:44.248     23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:44.248      23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.248      23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.248      23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.248      23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:12:44.248      23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:44.248  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:44.248    23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:12:44.248   23:54:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:12:50.820  Found 0000:af:00.0 (0x8086 - 0x159b)
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:50.820   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:12:50.821  Found 0000:af:00.1 (0x8086 - 0x159b)
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:12:50.821  Found net devices under 0000:af:00.0: cvl_0_0
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:12:50.821  Found net devices under 0000:af:00.1: cvl_0_1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:50.821  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:50.821  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms
00:12:50.821  
00:12:50.821  --- 10.0.0.2 ping statistics ---
00:12:50.821  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:50.821  rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:50.821  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:50.821  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms
00:12:50.821  
00:12:50.821  --- 10.0.0.1 ping statistics ---
00:12:50.821  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:50.821  rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2957742
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2957742
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2957742 ']'
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:50.821  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:50.821   23:54:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:50.821  [2024-12-09 23:54:06.015883] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:50.821  [2024-12-09 23:54:06.015926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:50.821  [2024-12-09 23:54:06.098234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:50.821  [2024-12-09 23:54:06.138357] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:50.821  [2024-12-09 23:54:06.138398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:50.821  [2024-12-09 23:54:06.138405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:50.821  [2024-12-09 23:54:06.138411] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:50.821  [2024-12-09 23:54:06.138417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:50.821  [2024-12-09 23:54:06.138904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.081  [2024-12-09 23:54:06.876580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.081  Malloc0
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.081   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.082  [2024-12-09 23:54:06.926787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
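The `rpc_cmd` calls above configure the whole target. Collected in one place as a sketch (subsystem NQN, bdev name, address, and port are as they appear in this run; `rpc.py` is SPDK's stock RPC client and must be pointed at a live `nvmf_tgt`):

```shell
RPC=scripts/rpc.py   # run from the SPDK source tree against a running nvmf_tgt

$RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
$RPC bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                    # -a: allow any host; -s: serial number
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

Once the listener is up, bdevperf attaches from the default namespace with `bdev_nvme_attach_controller` over its own RPC socket (`/var/tmp/bdevperf.sock`), as the subsequent trace lines show.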
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2958021
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2958021 /var/tmp/bdevperf.sock
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2958021 ']'
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:12:51.082  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:51.082   23:54:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.341  [2024-12-09 23:54:06.975572] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:12:51.341  [2024-12-09 23:54:06.975613] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958021 ]
00:12:51.341  [2024-12-09 23:54:07.049311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.341  [2024-12-09 23:54:07.090372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.341   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:51.341   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:12:51.341   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:51.341   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:51.341   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:51.599  NVMe0n1
00:12:51.599   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:51.599   23:54:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:51.859  Running I/O for 10 seconds...
00:12:53.734      12278.00 IOPS,    47.96 MiB/s
[2024-12-09T22:54:10.527Z]     12287.50 IOPS,    48.00 MiB/s
[2024-12-09T22:54:11.904Z]     12335.00 IOPS,    48.18 MiB/s
[2024-12-09T22:54:12.840Z]     12451.25 IOPS,    48.64 MiB/s
[2024-12-09T22:54:13.778Z]     12477.00 IOPS,    48.74 MiB/s
[2024-12-09T22:54:14.715Z]     12471.83 IOPS,    48.72 MiB/s
[2024-12-09T22:54:15.652Z]     12544.43 IOPS,    49.00 MiB/s
[2024-12-09T22:54:16.590Z]     12547.88 IOPS,    49.02 MiB/s
[2024-12-09T22:54:17.528Z]     12585.67 IOPS,    49.16 MiB/s
[2024-12-09T22:54:17.787Z]     12581.20 IOPS,    49.15 MiB/s
00:13:01.930                                                                                                  Latency(us)
00:13:01.930  
[2024-12-09T22:54:17.787Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:01.930  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:13:01.930  	 Verification LBA range: start 0x0 length 0x4000
00:13:01.930  	 NVMe0n1             :      10.06   12601.88      49.23       0.00     0.00   81009.40   18974.23   52179.14
00:13:01.930  
[2024-12-09T22:54:17.787Z]  ===================================================================================================================
00:13:01.930  
[2024-12-09T22:54:17.787Z]  Total                       :              12601.88      49.23       0.00     0.00   81009.40   18974.23   52179.14
00:13:01.930  {
00:13:01.930    "results": [
00:13:01.930      {
00:13:01.930        "job": "NVMe0n1",
00:13:01.930        "core_mask": "0x1",
00:13:01.930        "workload": "verify",
00:13:01.930        "status": "finished",
00:13:01.930        "verify_range": {
00:13:01.930          "start": 0,
00:13:01.930          "length": 16384
00:13:01.930        },
00:13:01.930        "queue_depth": 1024,
00:13:01.930        "io_size": 4096,
00:13:01.930        "runtime": 10.064611,
00:13:01.930        "iops": 12601.878006015335,
00:13:01.930        "mibps": 49.2260859609974,
00:13:01.930        "io_failed": 0,
00:13:01.930        "io_timeout": 0,
00:13:01.930        "avg_latency_us": 81009.40299830337,
00:13:01.930        "min_latency_us": 18974.23238095238,
00:13:01.930        "max_latency_us": 52179.13904761905
00:13:01.930      }
00:13:01.930    ],
00:13:01.930    "core_count": 1
00:13:01.930  }
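The `mibps` figure in the JSON follows directly from `iops` and `io_size`: throughput in MiB/s is IOPS × 4096 B ÷ 2²⁰. A quick check of the numbers reported above:

```shell
# 12601.878 IOPS of 4 KiB I/O converted to MiB/s; matches the "mibps" field.
awk 'BEGIN { printf "%.2f MiB/s\n", 12601.878006015335 * 4096 / (1024 * 1024) }'
# prints: 49.23 MiB/s
```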
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2958021
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2958021 ']'
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2958021
00:13:01.930    23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:01.930    23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2958021
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2958021'
00:13:01.930  killing process with pid 2958021
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2958021
00:13:01.930  Received shutdown signal, test time was about 10.000000 seconds
00:13:01.930  
00:13:01.930                                                                                                  Latency(us)
00:13:01.930  
[2024-12-09T22:54:17.787Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:01.930  
[2024-12-09T22:54:17.787Z]  ===================================================================================================================
00:13:01.930  
[2024-12-09T22:54:17.787Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2958021
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:01.930   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:02.189  rmmod nvme_tcp
00:13:02.189  rmmod nvme_fabrics
00:13:02.189  rmmod nvme_keyring
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2957742 ']'
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2957742
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2957742 ']'
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2957742
00:13:02.189    23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:02.189    23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2957742
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2957742'
00:13:02.189  killing process with pid 2957742
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2957742
00:13:02.189   23:54:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2957742
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:02.448   23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:02.448    23:54:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
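The teardown mirrors the setup: the `SPDK_NVMF` comment attached when the iptables rule was inserted lets cleanup drop exactly the test's rules, and deleting the namespace returns `cvl_0_0` to the default namespace. A condensed sketch (root required; interface and namespace names from this run):

```shell
# Strip only the firewall rules tagged SPDK_NVMF at insert time.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Deleting the namespace implicitly moves cvl_0_0 back to the default ns.
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1
```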
00:13:04.353  
00:13:04.353  real	0m20.317s
00:13:04.353  user	0m23.875s
00:13:04.353  sys	0m6.052s
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:13:04.353  ************************************
00:13:04.353  END TEST nvmf_queue_depth
00:13:04.353  ************************************
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.353   23:54:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:04.613  ************************************
00:13:04.613  START TEST nvmf_target_multipath
00:13:04.613  ************************************
00:13:04.613   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp
00:13:04.613  * Looking for test storage...
00:13:04.613  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:04.613  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.613  		--rc genhtml_branch_coverage=1
00:13:04.613  		--rc genhtml_function_coverage=1
00:13:04.613  		--rc genhtml_legend=1
00:13:04.613  		--rc geninfo_all_blocks=1
00:13:04.613  		--rc geninfo_unexecuted_blocks=1
00:13:04.613  		
00:13:04.613  		'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:04.613  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.613  		--rc genhtml_branch_coverage=1
00:13:04.613  		--rc genhtml_function_coverage=1
00:13:04.613  		--rc genhtml_legend=1
00:13:04.613  		--rc geninfo_all_blocks=1
00:13:04.613  		--rc geninfo_unexecuted_blocks=1
00:13:04.613  		
00:13:04.613  		'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:04.613  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.613  		--rc genhtml_branch_coverage=1
00:13:04.613  		--rc genhtml_function_coverage=1
00:13:04.613  		--rc genhtml_legend=1
00:13:04.613  		--rc geninfo_all_blocks=1
00:13:04.613  		--rc geninfo_unexecuted_blocks=1
00:13:04.613  		
00:13:04.613  		'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:04.613  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.613  		--rc genhtml_branch_coverage=1
00:13:04.613  		--rc genhtml_function_coverage=1
00:13:04.613  		--rc genhtml_legend=1
00:13:04.613  		--rc geninfo_all_blocks=1
00:13:04.613  		--rc geninfo_unexecuted_blocks=1
00:13:04.613  		
00:13:04.613  		'
00:13:04.613   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:04.613    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:04.613     23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:04.613      23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.613      23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.613      23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.613      23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:13:04.614      23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:04.614  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:04.614    23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:13:04.614   23:54:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:13:11.188   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:13:11.189  Found 0000:af:00.0 (0x8086 - 0x159b)
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:13:11.189  Found 0000:af:00.1 (0x8086 - 0x159b)
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:11.189  Found net devices under 0000:af:00.0: cvl_0_0
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:11.189  Found net devices under 0000:af:00.1: cvl_0_1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:11.189  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:11.189  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms
00:13:11.189  
00:13:11.189  --- 10.0.0.2 ping statistics ---
00:13:11.189  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:11.189  rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms
00:13:11.189   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:11.189  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:11.189  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms
00:13:11.189  
00:13:11.190  --- 10.0.0.1 ping statistics ---
00:13:11.190  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:11.190  rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:13:11.190  only one NIC for nvmf test
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:11.190  rmmod nvme_tcp
00:13:11.190  rmmod nvme_fabrics
00:13:11.190  rmmod nvme_keyring
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:11.190   23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:11.190    23:54:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:13.098  
00:13:13.098  real	0m8.295s
00:13:13.098  user	0m1.770s
00:13:13.098  sys	0m4.547s
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:13:13.098  ************************************
00:13:13.098  END TEST nvmf_target_multipath
00:13:13.098  ************************************
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:13.098  ************************************
00:13:13.098  START TEST nvmf_zcopy
00:13:13.098  ************************************
00:13:13.098   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:13:13.098  * Looking for test storage...
00:13:13.098  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:13.098     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:13:13.098     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:13:13.098    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:13.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:13.099  		--rc genhtml_branch_coverage=1
00:13:13.099  		--rc genhtml_function_coverage=1
00:13:13.099  		--rc genhtml_legend=1
00:13:13.099  		--rc geninfo_all_blocks=1
00:13:13.099  		--rc geninfo_unexecuted_blocks=1
00:13:13.099  		
00:13:13.099  		'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:13.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:13.099  		--rc genhtml_branch_coverage=1
00:13:13.099  		--rc genhtml_function_coverage=1
00:13:13.099  		--rc genhtml_legend=1
00:13:13.099  		--rc geninfo_all_blocks=1
00:13:13.099  		--rc geninfo_unexecuted_blocks=1
00:13:13.099  		
00:13:13.099  		'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:13.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:13.099  		--rc genhtml_branch_coverage=1
00:13:13.099  		--rc genhtml_function_coverage=1
00:13:13.099  		--rc genhtml_legend=1
00:13:13.099  		--rc geninfo_all_blocks=1
00:13:13.099  		--rc geninfo_unexecuted_blocks=1
00:13:13.099  		
00:13:13.099  		'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:13.099  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:13.099  		--rc genhtml_branch_coverage=1
00:13:13.099  		--rc genhtml_function_coverage=1
00:13:13.099  		--rc genhtml_legend=1
00:13:13.099  		--rc geninfo_all_blocks=1
00:13:13.099  		--rc geninfo_unexecuted_blocks=1
00:13:13.099  		
00:13:13.099  		'
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:13.099     23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:13.099      23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:13.099      23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:13.099      23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:13.099      23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:13:13.099      23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:13.099  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:13.099    23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:13:13.099   23:54:28 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:13:19.797  Found 0000:af:00.0 (0x8086 - 0x159b)
00:13:19.797   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:13:19.798  Found 0000:af:00.1 (0x8086 - 0x159b)
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:19.798  Found net devices under 0000:af:00.0: cvl_0_0
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:19.798  Found net devices under 0000:af:00.1: cvl_0_1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
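The `nvmf_tcp_init` trace above builds the test topology by moving the target-side NIC into a network namespace. The sequence can be collected as the sketch below (interface and namespace names taken from the log; the `run` wrapper only echoes, since the real commands need root and the physical NICs):

```shell
# Dry-run sketch of the namespace topology the trace sets up.
# Swap run() for "$@" to actually execute (requires root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, moved into the namespace, gets 10.0.0.2
INI_IF=cvl_0_1   # initiator side, stays in the root namespace, gets 10.0.0.1

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
```

Isolating the target NIC this way lets initiator and target traffic traverse the physical link even though both ends live on one host.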
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:19.798  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:19.798  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms
00:13:19.798  
00:13:19.798  --- 10.0.0.2 ping statistics ---
00:13:19.798  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.798  rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:19.798  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:19.798  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:13:19.798  
00:13:19.798  --- 10.0.0.1 ping statistics ---
00:13:19.798  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:19.798  rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2967104
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2967104
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2967104 ']'
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:19.798  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:19.798   23:54:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.798  [2024-12-09 23:54:34.861692] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:19.798  [2024-12-09 23:54:34.861740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:19.798  [2024-12-09 23:54:34.941770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:19.798  [2024-12-09 23:54:34.978756] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:19.798  [2024-12-09 23:54:34.978790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:19.798  [2024-12-09 23:54:34.978798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:19.798  [2024-12-09 23:54:34.978803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:19.798  [2024-12-09 23:54:34.978808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:19.798  [2024-12-09 23:54:34.979316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.798   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799  [2024-12-09 23:54:35.126277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799  [2024-12-09 23:54:35.150480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799  malloc0
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
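The `rpc_cmd` calls traced in this section configure the zcopy target end to end. Collected as a standalone sketch (the `rpc` wrapper echoes instead of executing, since the real calls need a running `nvmf_tgt` and SPDK's `rpc.py`, whose path is an assumption here):

```shell
# Dry-run sketch of the RPC sequence from the trace above.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

Note `--zcopy` on the transport and `-c 0` (in-capsule data size 0), which together force the zero-copy path this test exercises.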
00:13:19.799   23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:13:19.799  {
00:13:19.799    "params": {
00:13:19.799      "name": "Nvme$subsystem",
00:13:19.799      "trtype": "$TEST_TRANSPORT",
00:13:19.799      "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:19.799      "adrfam": "ipv4",
00:13:19.799      "trsvcid": "$NVMF_PORT",
00:13:19.799      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:19.799      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:19.799      "hdgst": ${hdgst:-false},
00:13:19.799      "ddgst": ${ddgst:-false}
00:13:19.799    },
00:13:19.799    "method": "bdev_nvme_attach_controller"
00:13:19.799  }
00:13:19.799  EOF
00:13:19.799  )")
00:13:19.799     23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:13:19.799    23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:13:19.799     23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:13:19.799     23:54:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:13:19.799    "params": {
00:13:19.799      "name": "Nvme1",
00:13:19.799      "trtype": "tcp",
00:13:19.799      "traddr": "10.0.0.2",
00:13:19.799      "adrfam": "ipv4",
00:13:19.799      "trsvcid": "4420",
00:13:19.799      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:19.799      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:19.799      "hdgst": false,
00:13:19.799      "ddgst": false
00:13:19.799    },
00:13:19.799    "method": "bdev_nvme_attach_controller"
00:13:19.799  }'
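The heredoc expansion above shows how `gen_nvmf_target_json` turns the per-subsystem template into the concrete `bdev_nvme_attach_controller` config fed to bdevperf via `/dev/fd/62`. A minimal standalone sketch of that JSON generation (values from the log; this is not the actual `nvmf/common.sh` helper):

```shell
# Sketch: emit the attach-controller JSON for one subsystem.
# Defaults mirror the expanded output in the trace.
gen_target_json() {
    local subsystem=${1:-1} traddr=${2:-10.0.0.2} trsvcid=${3:-4420}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```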
00:13:19.799  [2024-12-09 23:54:35.235972] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:19.799  [2024-12-09 23:54:35.236015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2967127 ]
00:13:19.799  [2024-12-09 23:54:35.306989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:19.799  [2024-12-09 23:54:35.346269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:19.799  Running I/O for 10 seconds...
00:13:22.111       8781.00 IOPS,    68.60 MiB/s
[2024-12-09T22:54:38.912Z]      8802.50 IOPS,    68.77 MiB/s
[2024-12-09T22:54:39.849Z]      8823.67 IOPS,    68.93 MiB/s
[2024-12-09T22:54:40.786Z]      8830.75 IOPS,    68.99 MiB/s
[2024-12-09T22:54:41.744Z]      8814.20 IOPS,    68.86 MiB/s
[2024-12-09T22:54:42.701Z]      8812.67 IOPS,    68.85 MiB/s
[2024-12-09T22:54:43.638Z]      8801.57 IOPS,    68.76 MiB/s
[2024-12-09T22:54:44.575Z]      8810.38 IOPS,    68.83 MiB/s
[2024-12-09T22:54:45.959Z]      8822.56 IOPS,    68.93 MiB/s
[2024-12-09T22:54:45.959Z]      8826.80 IOPS,    68.96 MiB/s
00:13:30.102                                                                                                  Latency(us)
00:13:30.102  
[2024-12-09T22:54:45.959Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:30.102  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:30.102  	 Verification LBA range: start 0x0 length 0x1000
00:13:30.102  	 Nvme1n1             :      10.01    8832.25      69.00       0.00     0.00   14451.22     674.86   22344.66
00:13:30.102  
[2024-12-09T22:54:45.959Z]  ===================================================================================================================
00:13:30.102  
[2024-12-09T22:54:45.959Z]  Total                       :               8832.25      69.00       0.00     0.00   14451.22     674.86   22344.66
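The Nvme1n1 summary line is internally consistent: with the 8192-byte IO size used by this run, 8832.25 IOPS works out to 8832.25 × 8192 / 2^20 ≈ 69.00 MiB/s, matching the MiB/s column. A one-line check:

```shell
# Sanity-check the bdevperf summary: MiB/s = IOPS * io_size_bytes / 2^20.
mibs=$(awk 'BEGIN { printf "%.2f", 8832.25 * 8192 / (1024 * 1024) }')
echo "$mibs"
```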
00:13:30.102   23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2968914
00:13:30.102   23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:30.102   23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:30.102   23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:13:30.102  {
00:13:30.102    "params": {
00:13:30.102      "name": "Nvme$subsystem",
00:13:30.102      "trtype": "$TEST_TRANSPORT",
00:13:30.102      "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:30.102      "adrfam": "ipv4",
00:13:30.102      "trsvcid": "$NVMF_PORT",
00:13:30.102      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:30.102      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:30.102      "hdgst": ${hdgst:-false},
00:13:30.102      "ddgst": ${ddgst:-false}
00:13:30.102    },
00:13:30.102    "method": "bdev_nvme_attach_controller"
00:13:30.102  }
00:13:30.102  EOF
00:13:30.102  )")
00:13:30.102  [2024-12-09 23:54:45.744561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.744599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102     23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:13:30.102    23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:13:30.102     23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:13:30.102     23:54:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:13:30.102    "params": {
00:13:30.102      "name": "Nvme1",
00:13:30.102      "trtype": "tcp",
00:13:30.102      "traddr": "10.0.0.2",
00:13:30.102      "adrfam": "ipv4",
00:13:30.102      "trsvcid": "4420",
00:13:30.102      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:30.102      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:30.102      "hdgst": false,
00:13:30.102      "ddgst": false
00:13:30.102    },
00:13:30.102    "method": "bdev_nvme_attach_controller"
00:13:30.102  }'
00:13:30.102  [2024-12-09 23:54:45.756556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.756569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.768586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.768595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.780615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.780624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.783911] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:30.102  [2024-12-09 23:54:45.783951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2968914 ]
00:13:30.102  [2024-12-09 23:54:45.792649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.792659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.804681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.804690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.816713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.816722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.828742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.102  [2024-12-09 23:54:45.828751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.102  [2024-12-09 23:54:45.840774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.840783] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.852807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.852815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.857288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:30.103  [2024-12-09 23:54:45.864843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.864854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.876877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.876890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.888910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.888920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.897613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:30.103  [2024-12-09 23:54:45.900950] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.900965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.912985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.913000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.925013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.925033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.937041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.937054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.103  [2024-12-09 23:54:45.949071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.103  [2024-12-09 23:54:45.949083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:45.961105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:45.961116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:45.973133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:45.973149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:45.985164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:45.985178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:45.997211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:45.997230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.009234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.009248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.021266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.021281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.033299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.033314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.045327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.045336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.057368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.057385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  Running I/O for 5 seconds...
00:13:30.362  [2024-12-09 23:54:46.073772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.073792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.087795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.087830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.101642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.101660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.115876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.115894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.129869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.129887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.143591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.143609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.157574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.157593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.171606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.171624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.185236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.185254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.198619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.198637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.362  [2024-12-09 23:54:46.212207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.362  [2024-12-09 23:54:46.212225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.225757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.225775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.239323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.239341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.253018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.253036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.267313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.267332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.280910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.280927] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.295121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.295139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.305748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.305767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.319965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.319983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.333459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.333477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.347057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.347074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.360808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.360826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.374468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.374486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.388028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.388047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.401723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.401741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.415834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.415853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.429651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.429668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.443380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.443399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.456771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.456788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.622  [2024-12-09 23:54:46.471052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.622  [2024-12-09 23:54:46.471070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.484888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.484907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.498826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.498847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.512436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.512456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.526410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.526430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.540642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.540661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.554686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.554705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.568551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.568570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.582403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.582421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.595964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.595983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.609629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.609647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.623547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.623564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.637205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.637224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.651380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.651397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.664973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.664991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.678933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.678951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.692986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.693005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.706438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.706457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.720040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.720058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:30.882  [2024-12-09 23:54:46.734010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:30.882  [2024-12-09 23:54:46.734028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.747818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.747837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.761737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.761756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.775376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.775394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.788987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.789005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.802768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.802787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.141  [2024-12-09 23:54:46.816960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.141  [2024-12-09 23:54:46.816978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.830998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.831016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.844567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.844585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.858558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.858576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.871972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.871990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.885736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.885754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.899747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.899765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.913109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.913127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.926836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.926854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.940671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.940689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.954610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.954628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.968597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.968615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.982126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.982145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.142  [2024-12-09 23:54:46.995811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.142  [2024-12-09 23:54:46.995830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.009439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.009456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.023228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.023247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.036985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.037003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.050613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.050631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401      16872.00 IOPS,   131.81 MiB/s
[2024-12-09T22:54:47.258Z] [2024-12-09 23:54:47.064435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.064452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.077655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.077673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.091483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.091501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.105024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.105042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.118399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.118417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.132628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.132646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.146329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.146347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.159924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.159942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.173032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.173049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.186955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.186973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.200657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.200675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.214415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.214433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.228357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.228375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.241885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.241903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.401  [2024-12-09 23:54:47.255486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.401  [2024-12-09 23:54:47.255513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.269173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.269190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.282711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.282729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.296190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.296207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.310157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.310181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.323728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.323746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.337232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.337250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.350769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.350786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.660  [2024-12-09 23:54:47.364656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.660  [2024-12-09 23:54:47.364674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.378737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.378755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.392301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.392319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.406123] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.406141] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.420058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.420076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.433583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.433601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.447261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.447279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.460960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.460978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.474825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.474843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.488252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.488270] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.501695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.501713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.661  [2024-12-09 23:54:47.516114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.661  [2024-12-09 23:54:47.516137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.531386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.531406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.545391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.545409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.559237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.559256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.573174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.573192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.586808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.586826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.600757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.600774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.614468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.614485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.628383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.628400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.642260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.642278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.655933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.655951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.669570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.669589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.683277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.683295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.696567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.696585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.710346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.710364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.723902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.723920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.737875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.737893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.751820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.751838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:31.920  [2024-12-09 23:54:47.765384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:31.920  [2024-12-09 23:54:47.765402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.779245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.779268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.793146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.793164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.807039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.807057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.820731] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.820748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.834569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.834587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.848148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.848175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.861809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.861827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.875216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.875236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.889286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.889305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.903082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.903100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.916759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.916778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.930309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.930328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.945109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.945128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.959026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.959044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.972796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.972813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:47.986702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:47.986720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:48.000323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:48.000341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:48.013995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:48.014013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.180  [2024-12-09 23:54:48.027771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.180  [2024-12-09 23:54:48.027790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.041637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.041656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.055678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.055696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440      17019.00 IOPS,   132.96 MiB/s
00:13:32.440  [2024-12-09 23:54:48.069264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.069283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.083187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.083206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.096959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.096977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.110671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.110690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.124390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.124408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.137878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.137896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.151382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.151400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.164870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.164888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.179009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.179028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.193089] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.193107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.203860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.203878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.217814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.217831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.231889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.231906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.242591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.242608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.256598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.256616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.270093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.270111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.283914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.283931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.440  [2024-12-09 23:54:48.297426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.440  [2024-12-09 23:54:48.297444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.311039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.311057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.324646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.324664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.338258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.338276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.352100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.352117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.365859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.365876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.379510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.379528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.393144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.393162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.406538] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.406556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.420486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.420504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.434347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.434365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.448208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.448226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.461777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.461795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.475308] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.475326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.489644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.489662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.505119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.505138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.519092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.519111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.532790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.532808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.700  [2024-12-09 23:54:48.546446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.700  [2024-12-09 23:54:48.546470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.559854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.559873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.573801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.573819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.587462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.587480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.600773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.600790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.614684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.614702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.628287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.628304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.641932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.641950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.655416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.655433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.669045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.669063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.682531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.682549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.696266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.959  [2024-12-09 23:54:48.696284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.959  [2024-12-09 23:54:48.709862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.709880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.723381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.723398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.737045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.737063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.750607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.750625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.764594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.764612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.778720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.778739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.792422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.792441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:32.960  [2024-12-09 23:54:48.806040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:32.960  [2024-12-09 23:54:48.806063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.819716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.819735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.833199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.833216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.847118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.847136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.860659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.860677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.874383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.874400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.887837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.887855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.901305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.901323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.914496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.914514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.928379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.928397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.941964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.941982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.955501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.955519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.969233] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.969251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.982905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.982923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:48.996438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:48.996456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:49.010234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:49.010251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:49.023424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:49.023441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:49.037195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:49.037212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:49.051045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:49.051063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.219  [2024-12-09 23:54:49.064866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.219  [2024-12-09 23:54:49.064889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.478      17042.00 IOPS,   133.14 MiB/s
00:13:33.479  [2024-12-09 23:54:49.078797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.078815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.092694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.092712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.106960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.106978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.120828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.120847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.134480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.134499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.148290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.148309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.162065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.162084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.175894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.175912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.190054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.190072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.200481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.200500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.214670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.214688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.228625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.228644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.242229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.242248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.255833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.255851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.270002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.270020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.283462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.283480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.297388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.297406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.310807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.310825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.479  [2024-12-09 23:54:49.324772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.479  [2024-12-09 23:54:49.324791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.338696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.338715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.350062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.350080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.364137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.364155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.377556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.377574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.391146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.391172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.405240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.405259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.419162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.419186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.432892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.432909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.446802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.446820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.460858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.460876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.474626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.474645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.488528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.488547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.502155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.502182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.515838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.515857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.529873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.529892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.543377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.543396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.557498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.557517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.571622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.571642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.738  [2024-12-09 23:54:49.585269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.738  [2024-12-09 23:54:49.585289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.599183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.599202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.612819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.612837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.626879] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.626896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.640231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.640249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.653943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.653961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.667942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.667960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.681851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.681868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.695656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.695673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.709140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.709158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.722673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.722692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.736196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.736213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.749989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.750006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.763486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.763504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.777398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.777416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.790845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.790864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.804669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.804686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.818314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.818331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.831850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.831868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:33.998  [2024-12-09 23:54:49.845792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:33.998  [2024-12-09 23:54:49.845809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.859862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.859880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.873645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.873662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.887429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.887446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.901177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.901194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.914847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.914865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.928581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.928598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.942250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.942267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.956099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.956117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.969765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.969782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.983198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.983215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:49.996958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:49.996976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:50.010854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:50.010873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:50.025782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:50.025801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:50.039600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:50.039618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:50.054035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:50.054053] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257  [2024-12-09 23:54:50.067631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.257  [2024-12-09 23:54:50.067648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.257      17026.75 IOPS,   133.02 MiB/s
00:13:34.258  [2024-12-09 23:54:50.081568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.258  [2024-12-09 23:54:50.081586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.258  [2024-12-09 23:54:50.095186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.258  [2024-12-09 23:54:50.095208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.258  [2024-12-09 23:54:50.109362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.258  [2024-12-09 23:54:50.109381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.123342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.123360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.137087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.137105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.150654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.150672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.164651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.164669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.179264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.179282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.195101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.195119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.208758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.208776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.222078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.222097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.516  [2024-12-09 23:54:50.236255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.516  [2024-12-09 23:54:50.236273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.252104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.252122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.265906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.265924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.280052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.280070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.293827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.293846] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.307212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.307230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.320951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.320969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.334946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.334964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.348595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.348613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.517  [2024-12-09 23:54:50.362305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.517  [2024-12-09 23:54:50.362328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.376211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.376229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.389707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.389724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.403093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.403111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.416555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.416574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.430343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.430361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.444157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.444184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.458080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.458098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.471943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.471961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.486020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.486038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.500238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.500256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.514016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.514033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.527793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.527811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.541643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.541661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.555970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.555988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.571390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.571409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.585284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.585304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.599390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.599410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.613202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.613221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:34.776  [2024-12-09 23:54:50.627178] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:34.776  [2024-12-09 23:54:50.627200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.641343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.641362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.655378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.655396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.669275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.669297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.683378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.683397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.697263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.697281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.710948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.710967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.724631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.724650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.738015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.738033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.752125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.035  [2024-12-09 23:54:50.752143] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.035  [2024-12-09 23:54:50.766028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.766047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.779867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.779885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.793522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.793541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.806884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.806902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.820873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.820892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.834674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.834692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.848801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.848820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.860527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.860545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.874939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.874961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.036  [2024-12-09 23:54:50.888822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.036  [2024-12-09 23:54:50.888845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.902630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.902648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.916675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.916694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.930743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.930761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.944339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.944357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.958134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.958152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.971626] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.971644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.985597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.985617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:50.999353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:50.999371] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.012809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.012827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.026643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.026661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.040395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.040412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.053964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.053981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.067780] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.067798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295      17021.40 IOPS,   132.98 MiB/s
00:13:35.295  [2024-12-09 23:54:51.078554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.078572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  
00:13:35.295                                                                                                  Latency(us)
00:13:35.295  
00:13:35.295   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:35.295  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:13:35.295  	 Nvme1n1             :       5.01   17023.36     133.00       0.00     0.00    7511.84    2933.52   18100.42
00:13:35.295  
00:13:35.295   ===================================================================================================================
00:13:35.295  
00:13:35.295   Total                       :              17023.36     133.00       0.00     0.00    7511.84    2933.52   18100.42
00:13:35.295  [2024-12-09 23:54:51.089990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.090005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.102026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.102041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.114062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.114083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.126087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.126101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.138118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.138131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.295  [2024-12-09 23:54:51.150149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.295  [2024-12-09 23:54:51.150161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.554  [2024-12-09 23:54:51.162182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.554  [2024-12-09 23:54:51.162196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.554  [2024-12-09 23:54:51.174231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.554  [2024-12-09 23:54:51.174245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.554  [2024-12-09 23:54:51.186243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.554  [2024-12-09 23:54:51.186256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.555  [2024-12-09 23:54:51.198273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.555  [2024-12-09 23:54:51.198282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.555  [2024-12-09 23:54:51.210306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.555  [2024-12-09 23:54:51.210318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.555  [2024-12-09 23:54:51.222335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.555  [2024-12-09 23:54:51.222345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.555  [2024-12-09 23:54:51.234368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:35.555  [2024-12-09 23:54:51.234377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:35.555  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2968914) - No such process
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2968914
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:35.555  delay0
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:35.555   23:54:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:13:35.814  [2024-12-09 23:54:51.435354] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:42.380  Initializing NVMe Controllers
00:13:42.380  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:42.380  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:42.380  Initialization complete. Launching workers.
00:13:42.380  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 760
00:13:42.380  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1047, failed to submit 33
00:13:42.380  	 success 876, unsuccessful 171, failed 0
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:42.380  rmmod nvme_tcp
00:13:42.380  rmmod nvme_fabrics
00:13:42.380  rmmod nvme_keyring
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2967104 ']'
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2967104
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2967104 ']'
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2967104
00:13:42.380    23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:42.380    23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2967104
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2967104'
00:13:42.380  killing process with pid 2967104
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2967104
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2967104
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:42.380   23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:42.380    23:54:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:44.286  
00:13:44.286  real	0m31.449s
00:13:44.286  user	0m42.022s
00:13:44.286  sys	0m11.034s
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:44.286  ************************************
00:13:44.286  END TEST nvmf_zcopy
00:13:44.286  ************************************
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:13:44.286  ************************************
00:13:44.286  START TEST nvmf_nmic
00:13:44.286  ************************************
00:13:44.286   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:13:44.546  * Looking for test storage...
00:13:44.546  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:13:44.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.546  		--rc genhtml_branch_coverage=1
00:13:44.546  		--rc genhtml_function_coverage=1
00:13:44.546  		--rc genhtml_legend=1
00:13:44.546  		--rc geninfo_all_blocks=1
00:13:44.546  		--rc geninfo_unexecuted_blocks=1
00:13:44.546  		
00:13:44.546  		'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:13:44.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.546  		--rc genhtml_branch_coverage=1
00:13:44.546  		--rc genhtml_function_coverage=1
00:13:44.546  		--rc genhtml_legend=1
00:13:44.546  		--rc geninfo_all_blocks=1
00:13:44.546  		--rc geninfo_unexecuted_blocks=1
00:13:44.546  		
00:13:44.546  		'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:44.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.546  		--rc genhtml_branch_coverage=1
00:13:44.546  		--rc genhtml_function_coverage=1
00:13:44.546  		--rc genhtml_legend=1
00:13:44.546  		--rc geninfo_all_blocks=1
00:13:44.546  		--rc geninfo_unexecuted_blocks=1
00:13:44.546  		
00:13:44.546  		'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:13:44.546  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.546  		--rc genhtml_branch_coverage=1
00:13:44.546  		--rc genhtml_function_coverage=1
00:13:44.546  		--rc genhtml_legend=1
00:13:44.546  		--rc geninfo_all_blocks=1
00:13:44.546  		--rc geninfo_unexecuted_blocks=1
00:13:44.546  		
00:13:44.546  		'
00:13:44.546   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:13:44.546    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:44.546     23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:44.546      23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:44.547      23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:44.547      23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:44.547      23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:13:44.547      23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:13:44.547  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:44.547    23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:13:44.547   23:55:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:13:50.323  Found 0000:af:00.0 (0x8086 - 0x159b)
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:13:50.323  Found 0000:af:00.1 (0x8086 - 0x159b)
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:13:50.323  Found net devices under 0000:af:00.0: cvl_0_0
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:13:50.323   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:13:50.324  Found net devices under 0000:af:00.1: cvl_0_1
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:13:50.324   23:55:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:13:50.324   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:50.582   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:50.582   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:50.582   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:13:50.582   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:13:50.582  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:50.582  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms
00:13:50.582  
00:13:50.582  --- 10.0.0.2 ping statistics ---
00:13:50.582  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:50.583  rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:50.583  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:50.583  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms
00:13:50.583  
00:13:50.583  --- 10.0.0.1 ping statistics ---
00:13:50.583  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:50.583  rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2974397
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2974397
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2974397 ']'
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:50.583  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:50.583   23:55:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:50.583  [2024-12-09 23:55:06.339754] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:13:50.583  [2024-12-09 23:55:06.339812] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:50.583  [2024-12-09 23:55:06.419865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:50.842  [2024-12-09 23:55:06.463778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:50.842  [2024-12-09 23:55:06.463811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:50.842  [2024-12-09 23:55:06.463818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:50.842  [2024-12-09 23:55:06.463824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:50.842  [2024-12-09 23:55:06.463829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:50.842  [2024-12-09 23:55:06.465138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:50.842  [2024-12-09 23:55:06.465246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:50.842  [2024-12-09 23:55:06.465259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:13:50.842  [2024-12-09 23:55:06.465261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.410  [2024-12-09 23:55:07.221120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.410  Malloc0
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.410   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670  [2024-12-09 23:55:07.286152] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:13:51.670  test case1: single bdev can't be used in multiple subsystems
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.670   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.670  [2024-12-09 23:55:07.314076] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:13:51.670  [2024-12-09 23:55:07.314095] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:13:51.670  [2024-12-09 23:55:07.314102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:51.670  request:
00:13:51.670  {
00:13:51.670  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:13:51.670  "namespace": {
00:13:51.670  "bdev_name": "Malloc0",
00:13:51.670  "no_auto_visible": false,
00:13:51.670  "hide_metadata": false
00:13:51.670  },
00:13:51.670  "method": "nvmf_subsystem_add_ns",
00:13:51.670  "req_id": 1
00:13:51.670  }
00:13:51.670  Got JSON-RPC error response
00:13:51.670  response:
00:13:51.670  {
00:13:51.670  "code": -32602,
00:13:51.670  "message": "Invalid parameters"
00:13:51.671  }
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:13:51.671   Adding namespace failed - expected result.
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:13:51.671  test case2: host connect to nvmf target in multiple paths
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:13:51.671  [2024-12-09 23:55:07.326201] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:51.671   23:55:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:53.049   23:55:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:13:53.985   23:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:13:53.985   23:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:13:53.985   23:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:53.985   23:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:53.985   23:55:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:13:55.888   23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:55.888    23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:55.888    23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:55.888   23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:55.888   23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:55.888   23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:13:55.888   23:55:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:13:55.888  [global]
00:13:55.888  thread=1
00:13:55.888  invalidate=1
00:13:55.888  rw=write
00:13:55.888  time_based=1
00:13:55.888  runtime=1
00:13:55.888  ioengine=libaio
00:13:55.888  direct=1
00:13:55.888  bs=4096
00:13:55.888  iodepth=1
00:13:55.888  norandommap=0
00:13:55.888  numjobs=1
00:13:55.888  
00:13:55.888  verify_dump=1
00:13:55.888  verify_backlog=512
00:13:55.888  verify_state_save=0
00:13:55.888  do_verify=1
00:13:55.888  verify=crc32c-intel
00:13:55.888  [job0]
00:13:55.888  filename=/dev/nvme0n1
00:13:55.888  Could not set queue depth (nvme0n1)
00:13:56.147  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:13:56.147  fio-3.35
00:13:56.147  Starting 1 thread
00:13:57.584  
00:13:57.584  job0: (groupid=0, jobs=1): err= 0: pid=2975453: Mon Dec  9 23:55:13 2024
00:13:57.584    read: IOPS=158, BW=635KiB/s (650kB/s)(640KiB/1008msec)
00:13:57.584      slat (nsec): min=7296, max=29767, avg=10145.39, stdev=5118.52
00:13:57.584      clat (usec): min=178, max=42903, avg=5571.78, stdev=13799.35
00:13:57.584       lat (usec): min=186, max=42932, avg=5581.92, stdev=13801.97
00:13:57.584      clat percentiles (usec):
00:13:57.584       |  1.00th=[  182],  5.00th=[  198], 10.00th=[  204], 20.00th=[  208],
00:13:57.584       | 30.00th=[  210], 40.00th=[  212], 50.00th=[  215], 60.00th=[  221],
00:13:57.584       | 70.00th=[  233], 80.00th=[  247], 90.00th=[40633], 95.00th=[41157],
00:13:57.584       | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:13:57.584       | 99.99th=[42730]
00:13:57.584    write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets
00:13:57.584      slat (usec): min=10, max=24818, avg=60.18, stdev=1096.30
00:13:57.584      clat (usec): min=118, max=327, avg=158.76, stdev=18.39
00:13:57.584       lat (usec): min=129, max=25146, avg=218.94, stdev=1103.93
00:13:57.584      clat percentiles (usec):
00:13:57.584       |  1.00th=[  122],  5.00th=[  129], 10.00th=[  137], 20.00th=[  151],
00:13:57.584       | 30.00th=[  153], 40.00th=[  155], 50.00th=[  159], 60.00th=[  161],
00:13:57.584       | 70.00th=[  165], 80.00th=[  169], 90.00th=[  176], 95.00th=[  182],
00:13:57.584       | 99.00th=[  206], 99.50th=[  219], 99.90th=[  330], 99.95th=[  330],
00:13:57.584       | 99.99th=[  330]
00:13:57.584     bw (  KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:13:57.584     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:13:57.584    lat (usec)   : 250=95.39%, 500=1.34%
00:13:57.584    lat (msec)   : 2=0.15%, 50=3.12%
00:13:57.584    cpu          : usr=0.40%, sys=1.19%, ctx=674, majf=0, minf=1
00:13:57.584    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:13:57.584       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:57.584       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:13:57.584       issued rwts: total=160,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:13:57.584       latency   : target=0, window=0, percentile=100.00%, depth=1
00:13:57.584  
00:13:57.584  Run status group 0 (all jobs):
00:13:57.584     READ: bw=635KiB/s (650kB/s), 635KiB/s-635KiB/s (650kB/s-650kB/s), io=640KiB (655kB), run=1008-1008msec
00:13:57.584    WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec
00:13:57.584  
00:13:57.584  Disk stats (read/write):
00:13:57.584    nvme0n1: ios=182/512, merge=0/0, ticks=1755/73, in_queue=1828, util=98.60%
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:57.584  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:13:57.584  rmmod nvme_tcp
00:13:57.584  rmmod nvme_fabrics
00:13:57.584  rmmod nvme_keyring
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2974397 ']'
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2974397
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2974397 ']'
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2974397
00:13:57.584    23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:57.584    23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2974397
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2974397'
00:13:57.584  killing process with pid 2974397
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2974397
00:13:57.584   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2974397
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:57.844   23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:13:57.844    23:55:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:00.381   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:00.381  
00:14:00.381  real	0m15.538s
00:14:00.381  user	0m35.542s
00:14:00.381  sys	0m5.313s
00:14:00.381   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:00.381   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:14:00.381  ************************************
00:14:00.381  END TEST nvmf_nmic
00:14:00.381  ************************************
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:14:00.382  ************************************
00:14:00.382  START TEST nvmf_fio_target
00:14:00.382  ************************************
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:14:00.382  * Looking for test storage...
00:14:00.382  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:00.382  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.382  		--rc genhtml_branch_coverage=1
00:14:00.382  		--rc genhtml_function_coverage=1
00:14:00.382  		--rc genhtml_legend=1
00:14:00.382  		--rc geninfo_all_blocks=1
00:14:00.382  		--rc geninfo_unexecuted_blocks=1
00:14:00.382  		
00:14:00.382  		'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:00.382  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.382  		--rc genhtml_branch_coverage=1
00:14:00.382  		--rc genhtml_function_coverage=1
00:14:00.382  		--rc genhtml_legend=1
00:14:00.382  		--rc geninfo_all_blocks=1
00:14:00.382  		--rc geninfo_unexecuted_blocks=1
00:14:00.382  		
00:14:00.382  		'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:00.382  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.382  		--rc genhtml_branch_coverage=1
00:14:00.382  		--rc genhtml_function_coverage=1
00:14:00.382  		--rc genhtml_legend=1
00:14:00.382  		--rc geninfo_all_blocks=1
00:14:00.382  		--rc geninfo_unexecuted_blocks=1
00:14:00.382  		
00:14:00.382  		'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:00.382  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:00.382  		--rc genhtml_branch_coverage=1
00:14:00.382  		--rc genhtml_function_coverage=1
00:14:00.382  		--rc genhtml_legend=1
00:14:00.382  		--rc geninfo_all_blocks=1
00:14:00.382  		--rc geninfo_unexecuted_blocks=1
00:14:00.382  		
00:14:00.382  		'
00:14:00.382   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:00.382    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:00.382     23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:00.382      23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.383      23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.383      23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.383      23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:14:00.383      23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:00.383  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:00.383    23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:14:00.383   23:55:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:06.960  Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:06.960  Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:06.960  Found net devices under 0000:af:00.0: cvl_0_0
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:06.960   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:14:06.961  Found net devices under 0000:af:00.1: cvl_0_1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:06.961  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:06.961  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms
00:14:06.961  
00:14:06.961  --- 10.0.0.2 ping statistics ---
00:14:06.961  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:06.961  rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:06.961  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:06.961  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:14:06.961  
00:14:06.961  --- 10.0.0.1 ping statistics ---
00:14:06.961  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:06.961  rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2979159
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2979159
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2979159 ']'
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:06.961  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:06.961   23:55:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.961  [2024-12-09 23:55:21.948238] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:14:06.961  [2024-12-09 23:55:21.948287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:06.961  [2024-12-09 23:55:22.028205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:06.961  [2024-12-09 23:55:22.067750] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:06.961  [2024-12-09 23:55:22.067788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:06.961  [2024-12-09 23:55:22.067796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:06.961  [2024-12-09 23:55:22.067801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:06.961  [2024-12-09 23:55:22.067806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:06.961  [2024-12-09 23:55:22.069114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:06.961  [2024-12-09 23:55:22.069234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:06.961  [2024-12-09 23:55:22.069269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:06.961  [2024-12-09 23:55:22.069270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.961   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:06.962   23:55:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:14:07.221  [2024-12-09 23:55:22.981120] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:07.221    23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:07.480   23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:14:07.480    23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:07.740   23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:14:07.740    23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:07.999   23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:14:07.999    23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:08.259   23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:14:08.259   23:55:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:14:08.259    23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:08.518   23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:14:08.518    23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:08.777   23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:14:08.777    23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:14:09.036   23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:14:09.036   23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
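The trace above shows target/fio.sh accumulating bdev names into space-separated lists (`malloc_bdevs='Malloc0 '`, `malloc_bdevs+=Malloc1`, and likewise for the raid and concat members) so later steps can iterate over them. A hedged re-sketch of that bookkeeping, with the RPC stubbed out so it runs standalone:

```shell
# Sketch of the bdev-name accumulation in the trace: each
# bdev_malloc_create call prints the new bdev's name, which is appended
# to a list. create_malloc is a stand-in for the real call,
# `rpc.py bdev_malloc_create 64 512`.
create_malloc() {
    echo "Malloc$1"   # the RPC returns the created bdev's name
}

malloc_bdevs=""
for i in 0 1; do
    malloc_bdevs+="$(create_malloc "$i") "
done

# Later, each accumulated name becomes a namespace, as in the trace:
#   for bdev in $malloc_bdevs; do
#       rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
#   done
```

The same pattern feeds `bdev_raid_create -b 'Malloc2 Malloc3'` and `-b 'Malloc4 Malloc5 Malloc6'` in the raid0/concat0 steps above.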
00:14:09.036   23:55:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:14:09.294   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:09.294   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:09.552   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:14:09.552   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:14:09.811   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:09.811  [2024-12-09 23:55:25.666585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:10.069   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:14:10.069   23:55:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:14:10.328   23:55:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:14:11.705   23:55:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:14:13.609   23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:13.609    23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:13.609    23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:13.609   23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:14:13.609   23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:13.609   23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
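The `waitforserial` steps above poll `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected number of namespaces appear (4 here, one per added namespace). A small re-sketch of the counting logic; lsblk output is taken on stdin so it can be exercised offline (the real helper pipes live `lsblk` output and retries up to 16 times with a 2-second sleep):

```shell
# Count block devices whose SERIAL column matches, as waitforserial does.
# `grep -c` still prints 0 when nothing matches but exits non-zero, hence
# the `|| true`.
count_matching_serials() {
    local serial=$1
    grep -c "$serial" || true
}
```

Once the count equals the expected namespace count, the helper returns 0 and the fio phase can start against /dev/nvme0n1 through /dev/nvme0n4.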
00:14:13.609   23:55:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:14:13.609  [global]
00:14:13.609  thread=1
00:14:13.609  invalidate=1
00:14:13.609  rw=write
00:14:13.609  time_based=1
00:14:13.609  runtime=1
00:14:13.609  ioengine=libaio
00:14:13.609  direct=1
00:14:13.609  bs=4096
00:14:13.609  iodepth=1
00:14:13.609  norandommap=0
00:14:13.609  numjobs=1
00:14:13.609  
00:14:13.609  verify_dump=1
00:14:13.609  verify_backlog=512
00:14:13.609  verify_state_save=0
00:14:13.609  do_verify=1
00:14:13.609  verify=crc32c-intel
00:14:13.609  [job0]
00:14:13.609  filename=/dev/nvme0n1
00:14:13.609  [job1]
00:14:13.609  filename=/dev/nvme0n2
00:14:13.609  [job2]
00:14:13.609  filename=/dev/nvme0n3
00:14:13.609  [job3]
00:14:13.609  filename=/dev/nvme0n4
00:14:13.609  Could not set queue depth (nvme0n1)
00:14:13.609  Could not set queue depth (nvme0n2)
00:14:13.609  Could not set queue depth (nvme0n3)
00:14:13.609  Could not set queue depth (nvme0n4)
00:14:13.867  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.867  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.867  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.867  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:13.867  fio-3.35
00:14:13.867  Starting 4 threads
00:14:15.269  
00:14:15.269  job0: (groupid=0, jobs=1): err= 0: pid=2980693: Mon Dec  9 23:55:30 2024
00:14:15.269    read: IOPS=1344, BW=5378KiB/s (5507kB/s)(5480KiB/1019msec)
00:14:15.269      slat (nsec): min=6460, max=23118, avg=7613.04, stdev=1629.39
00:14:15.269      clat (usec): min=161, max=41484, avg=525.16, stdev=3466.09
00:14:15.269       lat (usec): min=168, max=41492, avg=532.77, stdev=3466.76
00:14:15.269      clat percentiles (usec):
00:14:15.269       |  1.00th=[  176],  5.00th=[  184], 10.00th=[  188], 20.00th=[  198],
00:14:15.269       | 30.00th=[  206], 40.00th=[  217], 50.00th=[  225], 60.00th=[  231],
00:14:15.269       | 70.00th=[  239], 80.00th=[  249], 90.00th=[  273], 95.00th=[  302],
00:14:15.269       | 99.00th=[  486], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681],
00:14:15.269       | 99.99th=[41681]
00:14:15.269    write: IOPS=1507, BW=6029KiB/s (6174kB/s)(6144KiB/1019msec); 0 zone resets
00:14:15.269      slat (nsec): min=9138, max=39808, avg=11048.26, stdev=2143.84
00:14:15.269      clat (usec): min=108, max=3923, avg=172.42, stdev=115.60
00:14:15.269       lat (usec): min=119, max=3933, avg=183.47, stdev=115.70
00:14:15.269      clat percentiles (usec):
00:14:15.269       |  1.00th=[  119],  5.00th=[  127], 10.00th=[  137], 20.00th=[  147],
00:14:15.269       | 30.00th=[  153], 40.00th=[  159], 50.00th=[  163], 60.00th=[  172],
00:14:15.269       | 70.00th=[  180], 80.00th=[  188], 90.00th=[  206], 95.00th=[  229],
00:14:15.269       | 99.00th=[  247], 99.50th=[  273], 99.90th=[ 2311], 99.95th=[ 3916],
00:14:15.269       | 99.99th=[ 3916]
00:14:15.269     bw (  KiB/s): min=  600, max=11688, per=38.70%, avg=6144.00, stdev=7840.40, samples=2
00:14:15.269     iops        : min=  150, max= 2922, avg=1536.00, stdev=1960.10, samples=2
00:14:15.269    lat (usec)   : 250=90.30%, 500=9.15%, 750=0.07%, 1000=0.07%
00:14:15.269    lat (msec)   : 4=0.07%, 50=0.34%
00:14:15.269    cpu          : usr=1.28%, sys=2.95%, ctx=2906, majf=0, minf=1
00:14:15.269    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:15.270       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       issued rwts: total=1370,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:15.270       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:15.270  job1: (groupid=0, jobs=1): err= 0: pid=2980694: Mon Dec  9 23:55:30 2024
00:14:15.270    read: IOPS=518, BW=2072KiB/s (2122kB/s)(2116KiB/1021msec)
00:14:15.270      slat (nsec): min=6576, max=23954, avg=7757.00, stdev=2906.03
00:14:15.270      clat (usec): min=177, max=41122, avg=1529.26, stdev=7193.04
00:14:15.270       lat (usec): min=184, max=41146, avg=1537.02, stdev=7195.77
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  186],  5.00th=[  196], 10.00th=[  200], 20.00th=[  206],
00:14:15.270       | 30.00th=[  212], 40.00th=[  217], 50.00th=[  221], 60.00th=[  225],
00:14:15.270       | 70.00th=[  229], 80.00th=[  235], 90.00th=[  245], 95.00th=[  253],
00:14:15.270       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:14:15.270       | 99.99th=[41157]
00:14:15.270    write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets
00:14:15.270      slat (nsec): min=9676, max=43915, avg=11202.27, stdev=2187.15
00:14:15.270      clat (usec): min=116, max=3767, avg=187.88, stdev=119.91
00:14:15.270       lat (usec): min=127, max=3777, avg=199.08, stdev=120.07
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  123],  5.00th=[  129], 10.00th=[  133], 20.00th=[  139],
00:14:15.270       | 30.00th=[  147], 40.00th=[  165], 50.00th=[  190], 60.00th=[  204],
00:14:15.270       | 70.00th=[  212], 80.00th=[  225], 90.00th=[  239], 95.00th=[  245],
00:14:15.270       | 99.00th=[  285], 99.50th=[  306], 99.90th=[  453], 99.95th=[ 3752],
00:14:15.270       | 99.99th=[ 3752]
00:14:15.270     bw (  KiB/s): min= 8192, max= 8192, per=51.60%, avg=8192.00, stdev= 0.00, samples=1
00:14:15.270     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:14:15.270    lat (usec)   : 250=95.30%, 500=3.54%
00:14:15.270    lat (msec)   : 4=0.06%, 50=1.09%
00:14:15.270    cpu          : usr=0.88%, sys=1.47%, ctx=1554, majf=0, minf=1
00:14:15.270    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:15.270       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:15.270       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:15.270  job2: (groupid=0, jobs=1): err= 0: pid=2980696: Mon Dec  9 23:55:30 2024
00:14:15.270    read: IOPS=505, BW=2023KiB/s (2072kB/s)(2088KiB/1032msec)
00:14:15.270      slat (nsec): min=7266, max=26216, avg=8740.29, stdev=2691.23
00:14:15.270      clat (usec): min=182, max=41240, avg=1561.11, stdev=7239.97
00:14:15.270       lat (usec): min=190, max=41249, avg=1569.85, stdev=7242.15
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  186],  5.00th=[  194], 10.00th=[  200], 20.00th=[  210],
00:14:15.270       | 30.00th=[  221], 40.00th=[  227], 50.00th=[  233], 60.00th=[  237],
00:14:15.270       | 70.00th=[  245], 80.00th=[  260], 90.00th=[  285], 95.00th=[  297],
00:14:15.270       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:14:15.270       | 99.99th=[41157]
00:14:15.270    write: IOPS=992, BW=3969KiB/s (4064kB/s)(4096KiB/1032msec); 0 zone resets
00:14:15.270      slat (nsec): min=9461, max=38090, avg=11160.82, stdev=2407.61
00:14:15.270      clat (usec): min=120, max=522, avg=192.46, stdev=41.05
00:14:15.270       lat (usec): min=131, max=533, avg=203.62, stdev=41.10
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  126],  5.00th=[  133], 10.00th=[  139], 20.00th=[  147],
00:14:15.270       | 30.00th=[  163], 40.00th=[  188], 50.00th=[  200], 60.00th=[  208],
00:14:15.270       | 70.00th=[  217], 80.00th=[  225], 90.00th=[  235], 95.00th=[  247],
00:14:15.270       | 99.00th=[  281], 99.50th=[  322], 99.90th=[  465], 99.95th=[  523],
00:14:15.270       | 99.99th=[  523]
00:14:15.270     bw (  KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=2
00:14:15.270     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:14:15.270    lat (usec)   : 250=89.78%, 500=9.06%, 750=0.06%
00:14:15.270    lat (msec)   : 50=1.10%
00:14:15.270    cpu          : usr=1.07%, sys=1.36%, ctx=1546, majf=0, minf=2
00:14:15.270    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:15.270       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       issued rwts: total=522,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:15.270       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:15.270  job3: (groupid=0, jobs=1): err= 0: pid=2980698: Mon Dec  9 23:55:30 2024
00:14:15.270    read: IOPS=152, BW=609KiB/s (624kB/s)(620KiB/1018msec)
00:14:15.270      slat (nsec): min=7085, max=28778, avg=10338.73, stdev=4941.07
00:14:15.270      clat (usec): min=200, max=42023, avg=5937.01, stdev=14024.42
00:14:15.270       lat (usec): min=208, max=42046, avg=5947.35, stdev=14027.61
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  210],  5.00th=[  227], 10.00th=[  239], 20.00th=[  251],
00:14:15.270       | 30.00th=[  260], 40.00th=[  265], 50.00th=[  277], 60.00th=[  281],
00:14:15.270       | 70.00th=[  322], 80.00th=[  367], 90.00th=[41157], 95.00th=[41157],
00:14:15.270       | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:14:15.270       | 99.99th=[42206]
00:14:15.270    write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets
00:14:15.270      slat (nsec): min=10949, max=38153, avg=12488.34, stdev=2265.28
00:14:15.270      clat (usec): min=142, max=354, avg=171.11, stdev=16.97
00:14:15.270       lat (usec): min=153, max=392, avg=183.59, stdev=17.61
00:14:15.270      clat percentiles (usec):
00:14:15.270       |  1.00th=[  147],  5.00th=[  153], 10.00th=[  155], 20.00th=[  159],
00:14:15.270       | 30.00th=[  163], 40.00th=[  165], 50.00th=[  169], 60.00th=[  174],
00:14:15.270       | 70.00th=[  178], 80.00th=[  182], 90.00th=[  190], 95.00th=[  200],
00:14:15.270       | 99.00th=[  219], 99.50th=[  231], 99.90th=[  355], 99.95th=[  355],
00:14:15.270       | 99.99th=[  355]
00:14:15.270     bw (  KiB/s): min= 4096, max= 4096, per=25.80%, avg=4096.00, stdev= 0.00, samples=1
00:14:15.270     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:14:15.270    lat (usec)   : 250=81.11%, 500=15.29%
00:14:15.270    lat (msec)   : 2=0.30%, 20=0.15%, 50=3.15%
00:14:15.270    cpu          : usr=0.29%, sys=0.88%, ctx=667, majf=0, minf=1
00:14:15.270    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:15.270       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:15.270       issued rwts: total=155,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:15.270       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:15.270  
00:14:15.270  Run status group 0 (all jobs):
00:14:15.270     READ: bw=9984KiB/s (10.2MB/s), 609KiB/s-5378KiB/s (624kB/s-5507kB/s), io=10.1MiB (10.6MB), run=1018-1032msec
00:14:15.270    WRITE: bw=15.5MiB/s (16.3MB/s), 2012KiB/s-6029KiB/s (2060kB/s-6174kB/s), io=16.0MiB (16.8MB), run=1018-1032msec
00:14:15.270  
00:14:15.270  Disk stats (read/write):
00:14:15.270    nvme0n1: ios=1413/1536, merge=0/0, ticks=524/260, in_queue=784, util=86.67%
00:14:15.270    nvme0n2: ios=575/1024, merge=0/0, ticks=1444/184, in_queue=1628, util=98.58%
00:14:15.270    nvme0n3: ios=517/1024, merge=0/0, ticks=609/193, in_queue=802, util=89.06%
00:14:15.270    nvme0n4: ios=157/512, merge=0/0, ticks=1008/86, in_queue=1094, util=91.08%
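Not part of the test itself, but the per-job summaries above (`read: IOPS=1344, BW=5378KiB/s ...`) follow a fixed fio output shape, so the read IOPS figures can be pulled out of a captured log with a short awk filter. A sketch, assuming only that shape:

```shell
# Extract the read IOPS value from each fio per-job summary line, e.g.
#   read: IOPS=1344, BW=5378KiB/s (5507kB/s)(5480KiB/1019msec)
# Matches only 'read:' lines (write lines say 'write: IOPS=').
extract_read_iops() {
    awk -F'IOPS=' '/read: IOPS=/ { split($2, a, ","); print a[1] }'
}
```

Running the first group's log through it would yield one value per job (1344, 518, 505, 152 for jobs 0-3 above).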
00:14:15.270   23:55:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:14:15.270  [global]
00:14:15.270  thread=1
00:14:15.270  invalidate=1
00:14:15.270  rw=randwrite
00:14:15.270  time_based=1
00:14:15.270  runtime=1
00:14:15.270  ioengine=libaio
00:14:15.270  direct=1
00:14:15.270  bs=4096
00:14:15.270  iodepth=1
00:14:15.270  norandommap=0
00:14:15.270  numjobs=1
00:14:15.270  
00:14:15.270  verify_dump=1
00:14:15.270  verify_backlog=512
00:14:15.270  verify_state_save=0
00:14:15.270  do_verify=1
00:14:15.270  verify=crc32c-intel
00:14:15.270  [job0]
00:14:15.270  filename=/dev/nvme0n1
00:14:15.270  [job1]
00:14:15.270  filename=/dev/nvme0n2
00:14:15.270  [job2]
00:14:15.270  filename=/dev/nvme0n3
00:14:15.270  [job3]
00:14:15.270  filename=/dev/nvme0n4
00:14:15.270  Could not set queue depth (nvme0n1)
00:14:15.270  Could not set queue depth (nvme0n2)
00:14:15.270  Could not set queue depth (nvme0n3)
00:14:15.270  Could not set queue depth (nvme0n4)
00:14:15.531  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:15.531  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:15.531  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:15.531  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:15.531  fio-3.35
00:14:15.531  Starting 4 threads
00:14:16.902  
00:14:16.902  job0: (groupid=0, jobs=1): err= 0: pid=2981066: Mon Dec  9 23:55:32 2024
00:14:16.902    read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec)
00:14:16.902      slat (nsec): min=5983, max=26527, avg=7672.57, stdev=2150.04
00:14:16.902      clat (usec): min=167, max=41403, avg=764.55, stdev=4728.97
00:14:16.902       lat (usec): min=174, max=41410, avg=772.22, stdev=4729.82
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  176],  5.00th=[  182], 10.00th=[  186], 20.00th=[  190],
00:14:16.902       | 30.00th=[  194], 40.00th=[  198], 50.00th=[  200], 60.00th=[  202],
00:14:16.902       | 70.00th=[  206], 80.00th=[  212], 90.00th=[  229], 95.00th=[  334],
00:14:16.902       | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:14:16.902       | 99.99th=[41157]
00:14:16.902    write: IOPS=1337, BW=5351KiB/s (5479kB/s)(5356KiB/1001msec); 0 zone resets
00:14:16.902      slat (nsec): min=9173, max=63821, avg=10158.20, stdev=1946.34
00:14:16.902      clat (usec): min=109, max=355, avg=142.32, stdev=17.26
00:14:16.902       lat (usec): min=119, max=419, avg=152.48, stdev=17.83
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  116],  5.00th=[  120], 10.00th=[  123], 20.00th=[  127],
00:14:16.902       | 30.00th=[  131], 40.00th=[  137], 50.00th=[  143], 60.00th=[  147],
00:14:16.902       | 70.00th=[  151], 80.00th=[  157], 90.00th=[  165], 95.00th=[  169],
00:14:16.902       | 99.00th=[  186], 99.50th=[  194], 99.90th=[  210], 99.95th=[  355],
00:14:16.902       | 99.99th=[  355]
00:14:16.902     bw (  KiB/s): min= 4096, max= 4096, per=36.97%, avg=4096.00, stdev= 0.00, samples=1
00:14:16.902     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:14:16.902    lat (usec)   : 250=96.70%, 500=2.71%
00:14:16.902    lat (msec)   : 50=0.59%
00:14:16.902    cpu          : usr=1.00%, sys=2.30%, ctx=2364, majf=0, minf=1
00:14:16.902    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:16.902       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       issued rwts: total=1024,1339,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.902       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:16.902  job1: (groupid=0, jobs=1): err= 0: pid=2981067: Mon Dec  9 23:55:32 2024
00:14:16.902    read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec)
00:14:16.902      slat (nsec): min=10085, max=24112, avg=20949.70, stdev=3534.67
00:14:16.902      clat (usec): min=40743, max=41149, avg=40960.55, stdev=81.35
00:14:16.902       lat (usec): min=40753, max=41162, avg=40981.50, stdev=81.36
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[40633],  5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:14:16.902       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:14:16.902       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:14:16.902       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:14:16.902       | 99.99th=[41157]
00:14:16.902    write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets
00:14:16.902      slat (nsec): min=10465, max=37381, avg=12908.10, stdev=2172.52
00:14:16.902      clat (usec): min=137, max=267, avg=169.79, stdev=17.09
00:14:16.902       lat (usec): min=148, max=295, avg=182.69, stdev=17.86
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  143],  5.00th=[  149], 10.00th=[  151], 20.00th=[  155],
00:14:16.902       | 30.00th=[  159], 40.00th=[  163], 50.00th=[  167], 60.00th=[  174],
00:14:16.902       | 70.00th=[  178], 80.00th=[  184], 90.00th=[  192], 95.00th=[  200],
00:14:16.902       | 99.00th=[  223], 99.50th=[  239], 99.90th=[  269], 99.95th=[  269],
00:14:16.902       | 99.99th=[  269]
00:14:16.902     bw (  KiB/s): min= 4096, max= 4096, per=36.97%, avg=4096.00, stdev= 0.00, samples=1
00:14:16.902     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:14:16.902    lat (usec)   : 250=95.33%, 500=0.37%
00:14:16.902    lat (msec)   : 50=4.30%
00:14:16.902    cpu          : usr=0.48%, sys=0.96%, ctx=535, majf=0, minf=1
00:14:16.902    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:16.902       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.902       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:16.902  job2: (groupid=0, jobs=1): err= 0: pid=2981069: Mon Dec  9 23:55:32 2024
00:14:16.902    read: IOPS=389, BW=1557KiB/s (1594kB/s)(1560KiB/1002msec)
00:14:16.902      slat (nsec): min=5972, max=25031, avg=8370.47, stdev=3583.90
00:14:16.902      clat (usec): min=191, max=41512, avg=2274.75, stdev=8866.26
00:14:16.902       lat (usec): min=199, max=41520, avg=2283.12, stdev=8867.09
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  196],  5.00th=[  200], 10.00th=[  202], 20.00th=[  208],
00:14:16.902       | 30.00th=[  212], 40.00th=[  217], 50.00th=[  221], 60.00th=[  225],
00:14:16.902       | 70.00th=[  229], 80.00th=[  237], 90.00th=[  249], 95.00th=[30016],
00:14:16.902       | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:14:16.902       | 99.99th=[41681]
00:14:16.902    write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets
00:14:16.902      slat (nsec): min=9744, max=44085, avg=11142.59, stdev=2450.78
00:14:16.902      clat (usec): min=139, max=928, avg=198.45, stdev=62.38
00:14:16.902       lat (usec): min=148, max=942, avg=209.60, stdev=62.71
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  143],  5.00th=[  151], 10.00th=[  159], 20.00th=[  174],
00:14:16.902       | 30.00th=[  180], 40.00th=[  186], 50.00th=[  192], 60.00th=[  198],
00:14:16.902       | 70.00th=[  202], 80.00th=[  210], 90.00th=[  227], 95.00th=[  269],
00:14:16.902       | 99.00th=[  297], 99.50th=[  750], 99.90th=[  930], 99.95th=[  930],
00:14:16.902       | 99.99th=[  930]
00:14:16.902     bw (  KiB/s): min= 4096, max= 4096, per=36.97%, avg=4096.00, stdev= 0.00, samples=1
00:14:16.902     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:14:16.902    lat (usec)   : 250=91.69%, 500=5.65%, 750=0.22%, 1000=0.22%
00:14:16.902    lat (msec)   : 50=2.22%
00:14:16.902    cpu          : usr=0.50%, sys=0.80%, ctx=905, majf=0, minf=1
00:14:16.902    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:16.902       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.902       issued rwts: total=390,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.902       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:16.902  job3: (groupid=0, jobs=1): err= 0: pid=2981070: Mon Dec  9 23:55:32 2024
00:14:16.902    read: IOPS=35, BW=142KiB/s (145kB/s)(144KiB/1015msec)
00:14:16.902      slat (nsec): min=7103, max=25339, avg=17831.28, stdev=8236.45
00:14:16.902      clat (usec): min=234, max=41426, avg=25153.73, stdev=20128.35
00:14:16.902       lat (usec): min=242, max=41437, avg=25171.56, stdev=20135.69
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  235],  5.00th=[  243], 10.00th=[  258], 20.00th=[  265],
00:14:16.902       | 30.00th=[  269], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157],
00:14:16.902       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:14:16.902       | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:14:16.902       | 99.99th=[41681]
00:14:16.902    write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets
00:14:16.902      slat (nsec): min=9684, max=36984, avg=10835.02, stdev=2082.46
00:14:16.902      clat (usec): min=137, max=956, avg=198.98, stdev=57.04
00:14:16.902       lat (usec): min=147, max=967, avg=209.82, stdev=57.21
00:14:16.902      clat percentiles (usec):
00:14:16.902       |  1.00th=[  143],  5.00th=[  153], 10.00th=[  161], 20.00th=[  174],
00:14:16.902       | 30.00th=[  180], 40.00th=[  186], 50.00th=[  192], 60.00th=[  202],
00:14:16.902       | 70.00th=[  208], 80.00th=[  210], 90.00th=[  229], 95.00th=[  258],
00:14:16.902       | 99.00th=[  314], 99.50th=[  660], 99.90th=[  955], 99.95th=[  955],
00:14:16.902       | 99.99th=[  955]
00:14:16.902     bw (  KiB/s): min= 4096, max= 4096, per=36.97%, avg=4096.00, stdev= 0.00, samples=1
00:14:16.902     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:14:16.903    lat (usec)   : 250=88.32%, 500=6.93%, 750=0.36%, 1000=0.36%
00:14:16.903    lat (msec)   : 50=4.01%
00:14:16.903    cpu          : usr=0.39%, sys=0.49%, ctx=549, majf=0, minf=1
00:14:16.903    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:16.903       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.903       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:16.903       issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:16.903       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:16.903  
00:14:16.903  Run status group 0 (all jobs):
00:14:16.903     READ: bw=5676KiB/s (5813kB/s), 88.6KiB/s-4092KiB/s (90.8kB/s-4190kB/s), io=5892KiB (6033kB), run=1001-1038msec
00:14:16.903    WRITE: bw=10.8MiB/s (11.3MB/s), 1973KiB/s-5351KiB/s (2020kB/s-5479kB/s), io=11.2MiB (11.8MB), run=1001-1038msec
00:14:16.903  
00:14:16.903  Disk stats (read/write):
00:14:16.903    nvme0n1: ios=612/1024, merge=0/0, ticks=753/139, in_queue=892, util=87.76%
00:14:16.903    nvme0n2: ios=67/512, merge=0/0, ticks=758/87, in_queue=845, util=88.02%
00:14:16.903    nvme0n3: ios=99/512, merge=0/0, ticks=1650/99, in_queue=1749, util=94.17%
00:14:16.903    nvme0n4: ios=76/512, merge=0/0, ticks=944/94, in_queue=1038, util=98.43%
00:14:16.903   23:55:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:14:16.903  [global]
00:14:16.903  thread=1
00:14:16.903  invalidate=1
00:14:16.903  rw=write
00:14:16.903  time_based=1
00:14:16.903  runtime=1
00:14:16.903  ioengine=libaio
00:14:16.903  direct=1
00:14:16.903  bs=4096
00:14:16.903  iodepth=128
00:14:16.903  norandommap=0
00:14:16.903  numjobs=1
00:14:16.903  
00:14:16.903  verify_dump=1
00:14:16.903  verify_backlog=512
00:14:16.903  verify_state_save=0
00:14:16.903  do_verify=1
00:14:16.903  verify=crc32c-intel
00:14:16.903  [job0]
00:14:16.903  filename=/dev/nvme0n1
00:14:16.903  [job1]
00:14:16.903  filename=/dev/nvme0n2
00:14:16.903  [job2]
00:14:16.903  filename=/dev/nvme0n3
00:14:16.903  [job3]
00:14:16.903  filename=/dev/nvme0n4
00:14:16.903  Could not set queue depth (nvme0n1)
00:14:16.903  Could not set queue depth (nvme0n2)
00:14:16.903  Could not set queue depth (nvme0n3)
00:14:16.903  Could not set queue depth (nvme0n4)
00:14:17.159  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:17.159  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:17.159  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:17.159  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:17.159  fio-3.35
00:14:17.159  Starting 4 threads
00:14:18.532  
00:14:18.532  job0: (groupid=0, jobs=1): err= 0: pid=2981438: Mon Dec  9 23:55:34 2024
00:14:18.532    read: IOPS=4912, BW=19.2MiB/s (20.1MB/s)(19.4MiB/1009msec)
00:14:18.532      slat (nsec): min=1641, max=12394k, avg=96273.47, stdev=715015.95
00:14:18.532      clat (usec): min=3588, max=35995, avg=12153.36, stdev=3702.62
00:14:18.532       lat (usec): min=3601, max=35999, avg=12249.63, stdev=3765.16
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 5932],  5.00th=[ 9110], 10.00th=[ 9241], 20.00th=[ 9503],
00:14:18.532       | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[11469], 60.00th=[11731],
00:14:18.532       | 70.00th=[12780], 80.00th=[15008], 90.00th=[17171], 95.00th=[18482],
00:14:18.532       | 99.00th=[24511], 99.50th=[30540], 99.90th=[35914], 99.95th=[35914],
00:14:18.532       | 99.99th=[35914]
00:14:18.532    write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets
00:14:18.532      slat (usec): min=2, max=13056, avg=90.90, stdev=603.11
00:14:18.532      clat (usec): min=1520, max=94379, avg=13222.96, stdev=12286.41
00:14:18.532       lat (usec): min=1534, max=94391, avg=13313.86, stdev=12361.91
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 3687],  5.00th=[ 5669], 10.00th=[ 6521], 20.00th=[ 7767],
00:14:18.532       | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10421], 60.00th=[10945],
00:14:18.532       | 70.00th=[11469], 80.00th=[11863], 90.00th=[23725], 95.00th=[29492],
00:14:18.532       | 99.00th=[84411], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897],
00:14:18.532       | 99.99th=[94897]
00:14:18.532     bw (  KiB/s): min=16384, max=24576, per=29.34%, avg=20480.00, stdev=5792.62, samples=2
00:14:18.532     iops        : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2
00:14:18.532    lat (msec)   : 2=0.02%, 4=0.82%, 10=36.60%, 20=55.92%, 50=5.23%
00:14:18.532    lat (msec)   : 100=1.41%
00:14:18.532    cpu          : usr=3.87%, sys=6.65%, ctx=494, majf=0, minf=1
00:14:18.532    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:14:18.532       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:18.532       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:18.532       issued rwts: total=4957,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:18.532       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:18.532  job1: (groupid=0, jobs=1): err= 0: pid=2981439: Mon Dec  9 23:55:34 2024
00:14:18.532    read: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec)
00:14:18.532      slat (nsec): min=1117, max=3270.9k, avg=78611.34, stdev=391032.19
00:14:18.532      clat (usec): min=6426, max=33497, avg=10368.01, stdev=2558.74
00:14:18.532       lat (usec): min=6543, max=35677, avg=10446.62, stdev=2543.18
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 7046],  5.00th=[ 8029], 10.00th=[ 8455], 20.00th=[ 8717],
00:14:18.532       | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10552],
00:14:18.532       | 70.00th=[10814], 80.00th=[11469], 90.00th=[12256], 95.00th=[12780],
00:14:18.532       | 99.00th=[25297], 99.50th=[31327], 99.90th=[32375], 99.95th=[33424],
00:14:18.532       | 99.99th=[33424]
00:14:18.532    write: IOPS=6284, BW=24.5MiB/s (25.7MB/s)(24.7MiB/1006msec); 0 zone resets
00:14:18.532      slat (nsec): min=1912, max=6722.0k, avg=78649.32, stdev=326516.31
00:14:18.532      clat (usec): min=4151, max=36134, avg=10038.84, stdev=2895.88
00:14:18.532       lat (usec): min=5090, max=36143, avg=10117.49, stdev=2907.71
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 6587],  5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 8455],
00:14:18.532       | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10028],
00:14:18.532       | 70.00th=[10421], 80.00th=[11469], 90.00th=[12125], 95.00th=[12911],
00:14:18.532       | 99.00th=[26608], 99.50th=[33162], 99.90th=[35914], 99.95th=[35914],
00:14:18.532       | 99.99th=[35914]
00:14:18.532     bw (  KiB/s): min=24576, max=24976, per=35.50%, avg=24776.00, stdev=282.84, samples=2
00:14:18.532     iops        : min= 6144, max= 6244, avg=6194.00, stdev=70.71, samples=2
00:14:18.532    lat (msec)   : 10=52.01%, 20=46.86%, 50=1.14%
00:14:18.532    cpu          : usr=2.19%, sys=4.28%, ctx=856, majf=0, minf=1
00:14:18.532    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:18.532       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:18.532       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:18.532       issued rwts: total=6144,6322,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:18.532       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:18.532  job2: (groupid=0, jobs=1): err= 0: pid=2981440: Mon Dec  9 23:55:34 2024
00:14:18.532    read: IOPS=2250, BW=9002KiB/s (9218kB/s)(9452KiB/1050msec)
00:14:18.532      slat (nsec): min=1528, max=40761k, avg=221079.83, stdev=1779374.32
00:14:18.532      clat (usec): min=7754, max=91275, avg=28216.98, stdev=19020.30
00:14:18.532       lat (usec): min=7764, max=94013, avg=28438.06, stdev=19193.03
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 8586],  5.00th=[11338], 10.00th=[12649], 20.00th=[13173],
00:14:18.532       | 30.00th=[13304], 40.00th=[14877], 50.00th=[16188], 60.00th=[30540],
00:14:18.532       | 70.00th=[36963], 80.00th=[46924], 90.00th=[58459], 95.00th=[63701],
00:14:18.532       | 99.00th=[79168], 99.50th=[84411], 99.90th=[91751], 99.95th=[91751],
00:14:18.532       | 99.99th=[91751]
00:14:18.532    write: IOPS=2438, BW=9752KiB/s (9986kB/s)(10.0MiB/1050msec); 0 zone resets
00:14:18.532      slat (usec): min=2, max=12655, avg=182.73, stdev=813.32
00:14:18.532      clat (usec): min=6968, max=91291, avg=25810.36, stdev=17131.38
00:14:18.532       lat (usec): min=6983, max=96963, avg=25993.09, stdev=17208.28
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 8979],  5.00th=[12518], 10.00th=[12649], 20.00th=[13042],
00:14:18.532       | 30.00th=[13566], 40.00th=[18220], 50.00th=[21890], 60.00th=[22414],
00:14:18.532       | 70.00th=[25822], 80.00th=[32900], 90.00th=[52691], 95.00th=[65799],
00:14:18.532       | 99.00th=[87557], 99.50th=[88605], 99.90th=[88605], 99.95th=[91751],
00:14:18.532       | 99.99th=[91751]
00:14:18.532     bw (  KiB/s): min= 7048, max=13432, per=14.67%, avg=10240.00, stdev=4514.17, samples=2
00:14:18.532     iops        : min= 1762, max= 3358, avg=2560.00, stdev=1128.54, samples=2
00:14:18.532    lat (msec)   : 10=2.44%, 20=45.01%, 50=41.15%, 100=11.40%
00:14:18.532    cpu          : usr=2.57%, sys=3.15%, ctx=319, majf=0, minf=1
00:14:18.532    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:14:18.532       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:18.532       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:18.532       issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:18.532       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:18.532  job3: (groupid=0, jobs=1): err= 0: pid=2981441: Mon Dec  9 23:55:34 2024
00:14:18.532    read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec)
00:14:18.532      slat (nsec): min=1810, max=23509k, avg=133354.71, stdev=987547.40
00:14:18.532      clat (usec): min=4912, max=92918, avg=15753.55, stdev=11484.01
00:14:18.532       lat (usec): min=4919, max=92929, avg=15886.90, stdev=11588.55
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 5800],  5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10421],
00:14:18.532       | 30.00th=[11076], 40.00th=[12256], 50.00th=[12780], 60.00th=[13042],
00:14:18.532       | 70.00th=[13698], 80.00th=[17957], 90.00th=[22938], 95.00th=[34866],
00:14:18.532       | 99.00th=[80217], 99.50th=[90702], 99.90th=[92799], 99.95th=[92799],
00:14:18.532       | 99.99th=[92799]
00:14:18.532    write: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(16.9MiB/1007msec); 0 zone resets
00:14:18.532      slat (usec): min=2, max=9898, avg=88.24, stdev=468.53
00:14:18.532      clat (usec): min=348, max=92884, avg=14614.39, stdev=8789.30
00:14:18.532       lat (usec): min=369, max=92888, avg=14702.63, stdev=8832.39
00:14:18.532      clat percentiles (usec):
00:14:18.532       |  1.00th=[ 2180],  5.00th=[ 4293], 10.00th=[ 6521], 20.00th=[ 8979],
00:14:18.532       | 30.00th=[11076], 40.00th=[11338], 50.00th=[12387], 60.00th=[12911],
00:14:18.532       | 70.00th=[14091], 80.00th=[21365], 90.00th=[24511], 95.00th=[28705],
00:14:18.532       | 99.00th=[50070], 99.50th=[57934], 99.90th=[70779], 99.95th=[70779],
00:14:18.532       | 99.99th=[92799]
00:14:18.532     bw (  KiB/s): min=15624, max=17904, per=24.02%, avg=16764.00, stdev=1612.20, samples=2
00:14:18.532     iops        : min= 3906, max= 4476, avg=4191.00, stdev=403.05, samples=2
00:14:18.532    lat (usec)   : 500=0.04%, 750=0.06%, 1000=0.12%
00:14:18.532    lat (msec)   : 2=0.23%, 4=1.58%, 10=17.21%, 20=59.96%, 50=18.93%
00:14:18.532    lat (msec)   : 100=1.88%
00:14:18.532    cpu          : usr=3.28%, sys=6.66%, ctx=431, majf=0, minf=1
00:14:18.532    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:14:18.532       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:18.532       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:18.532       issued rwts: total=4096,4319,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:18.532       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:18.532  
00:14:18.532  Run status group 0 (all jobs):
00:14:18.532     READ: bw=65.3MiB/s (68.5MB/s), 9002KiB/s-23.9MiB/s (9218kB/s-25.0MB/s), io=68.6MiB (71.9MB), run=1006-1050msec
00:14:18.533    WRITE: bw=68.2MiB/s (71.5MB/s), 9752KiB/s-24.5MiB/s (9986kB/s-25.7MB/s), io=71.6MiB (75.0MB), run=1006-1050msec
00:14:18.533  
00:14:18.533  Disk stats (read/write):
00:14:18.533    nvme0n1: ios=4113/4503, merge=0/0, ticks=47388/57256, in_queue=104644, util=86.07%
00:14:18.533    nvme0n2: ios=5363/5632, merge=0/0, ticks=13995/12935, in_queue=26930, util=90.15%
00:14:18.533    nvme0n3: ios=2106/2367, merge=0/0, ticks=25047/27797, in_queue=52844, util=93.56%
00:14:18.533    nvme0n4: ios=3095/3463, merge=0/0, ticks=52749/52092, in_queue=104841, util=94.23%
00:14:18.533   23:55:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:14:18.533  [global]
00:14:18.533  thread=1
00:14:18.533  invalidate=1
00:14:18.533  rw=randwrite
00:14:18.533  time_based=1
00:14:18.533  runtime=1
00:14:18.533  ioengine=libaio
00:14:18.533  direct=1
00:14:18.533  bs=4096
00:14:18.533  iodepth=128
00:14:18.533  norandommap=0
00:14:18.533  numjobs=1
00:14:18.533  
00:14:18.533  verify_dump=1
00:14:18.533  verify_backlog=512
00:14:18.533  verify_state_save=0
00:14:18.533  do_verify=1
00:14:18.533  verify=crc32c-intel
00:14:18.533  [job0]
00:14:18.533  filename=/dev/nvme0n1
00:14:18.533  [job1]
00:14:18.533  filename=/dev/nvme0n2
00:14:18.533  [job2]
00:14:18.533  filename=/dev/nvme0n3
00:14:18.533  [job3]
00:14:18.533  filename=/dev/nvme0n4
00:14:18.533  Could not set queue depth (nvme0n1)
00:14:18.533  Could not set queue depth (nvme0n2)
00:14:18.533  Could not set queue depth (nvme0n3)
00:14:18.533  Could not set queue depth (nvme0n4)
00:14:18.533  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:18.533  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:18.533  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:18.533  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:14:18.533  fio-3.35
00:14:18.533  Starting 4 threads
00:14:19.905  
00:14:19.905  job0: (groupid=0, jobs=1): err= 0: pid=2981803: Mon Dec  9 23:55:35 2024
00:14:19.906    read: IOPS=5696, BW=22.3MiB/s (23.3MB/s)(22.3MiB/1002msec)
00:14:19.906      slat (nsec): min=1404, max=19409k, avg=82796.88, stdev=568951.71
00:14:19.906      clat (usec): min=1459, max=28618, avg=11098.10, stdev=3088.50
00:14:19.906       lat (usec): min=1461, max=28645, avg=11180.90, stdev=3120.74
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 6587],  5.00th=[ 7701], 10.00th=[ 8848], 20.00th=[ 9503],
00:14:19.906       | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290],
00:14:19.906       | 70.00th=[11207], 80.00th=[12256], 90.00th=[15533], 95.00th=[17957],
00:14:19.906       | 99.00th=[21365], 99.50th=[23200], 99.90th=[23987], 99.95th=[23987],
00:14:19.906       | 99.99th=[28705]
00:14:19.906    write: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets
00:14:19.906      slat (usec): min=2, max=8605, avg=75.71, stdev=489.32
00:14:19.906      clat (usec): min=1651, max=31263, avg=10364.38, stdev=2719.13
00:14:19.906       lat (usec): min=1810, max=31266, avg=10440.09, stdev=2769.09
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 5080],  5.00th=[ 7242], 10.00th=[ 8094], 20.00th=[ 9110],
00:14:19.906       | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028],
00:14:19.906       | 70.00th=[10552], 80.00th=[11600], 90.00th=[13042], 95.00th=[15533],
00:14:19.906       | 99.00th=[22414], 99.50th=[25822], 99.90th=[28967], 99.95th=[31327],
00:14:19.906       | 99.99th=[31327]
00:14:19.906     bw (  KiB/s): min=22224, max=26520, per=32.92%, avg=24372.00, stdev=3037.73, samples=2
00:14:19.906     iops        : min= 5556, max= 6630, avg=6093.00, stdev=759.43, samples=2
00:14:19.906    lat (msec)   : 2=0.18%, 4=0.13%, 10=52.03%, 20=45.10%, 50=2.56%
00:14:19.906    cpu          : usr=4.90%, sys=7.69%, ctx=485, majf=0, minf=1
00:14:19.906    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:14:19.906       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:19.906       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:19.906       issued rwts: total=5708,6144,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:19.906       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:19.906  job1: (groupid=0, jobs=1): err= 0: pid=2981804: Mon Dec  9 23:55:35 2024
00:14:19.906    read: IOPS=2035, BW=8143KiB/s (8339kB/s)(8192KiB/1006msec)
00:14:19.906      slat (nsec): min=1379, max=16434k, avg=225632.50, stdev=1367365.20
00:14:19.906      clat (usec): min=7989, max=54240, avg=28514.86, stdev=10720.17
00:14:19.906       lat (usec): min=7995, max=54267, avg=28740.49, stdev=10843.28
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 8029],  5.00th=[11600], 10.00th=[13435], 20.00th=[17433],
00:14:19.906       | 30.00th=[20317], 40.00th=[26346], 50.00th=[30016], 60.00th=[32113],
00:14:19.906       | 70.00th=[34341], 80.00th=[39584], 90.00th=[42730], 95.00th=[45876],
00:14:19.906       | 99.00th=[47973], 99.50th=[50070], 99.90th=[53740], 99.95th=[54264],
00:14:19.906       | 99.99th=[54264]
00:14:19.906    write: IOPS=2427, BW=9710KiB/s (9943kB/s)(9768KiB/1006msec); 0 zone resets
00:14:19.906      slat (usec): min=2, max=16373, avg=215.34, stdev=1055.84
00:14:19.906      clat (usec): min=1570, max=55549, avg=28329.51, stdev=14494.90
00:14:19.906       lat (usec): min=5713, max=57973, avg=28544.85, stdev=14586.85
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 5735],  5.00th=[ 7898], 10.00th=[10028], 20.00th=[14091],
00:14:19.906       | 30.00th=[19268], 40.00th=[21103], 50.00th=[25560], 60.00th=[31589],
00:14:19.906       | 70.00th=[39060], 80.00th=[44303], 90.00th=[47973], 95.00th=[51643],
00:14:19.906       | 99.00th=[54264], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313],
00:14:19.906       | 99.99th=[55313]
00:14:19.906     bw (  KiB/s): min= 6912, max=11600, per=12.50%, avg=9256.00, stdev=3314.92, samples=2
00:14:19.906     iops        : min= 1728, max= 2900, avg=2314.00, stdev=828.73, samples=2
00:14:19.906    lat (msec)   : 2=0.02%, 10=5.90%, 20=24.59%, 50=64.94%, 100=4.54%
00:14:19.906    cpu          : usr=2.19%, sys=3.08%, ctx=282, majf=0, minf=1
00:14:19.906    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6%
00:14:19.906       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:19.906       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:19.906       issued rwts: total=2048,2442,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:19.906       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:19.906  job2: (groupid=0, jobs=1): err= 0: pid=2981805: Mon Dec  9 23:55:35 2024
00:14:19.906    read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec)
00:14:19.906      slat (nsec): min=1684, max=18887k, avg=109409.91, stdev=699186.56
00:14:19.906      clat (usec): min=7510, max=40963, avg=14493.03, stdev=4448.81
00:14:19.906       lat (usec): min=7518, max=40988, avg=14602.44, stdev=4492.60
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 8160],  5.00th=[10290], 10.00th=[11338], 20.00th=[11731],
00:14:19.906       | 30.00th=[12649], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091],
00:14:19.906       | 70.00th=[14615], 80.00th=[15270], 90.00th=[18482], 95.00th=[22152],
00:14:19.906       | 99.00th=[35390], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109],
00:14:19.906       | 99.99th=[41157]
00:14:19.906    write: IOPS=4491, BW=17.5MiB/s (18.4MB/s)(17.6MiB/1005msec); 0 zone resets
00:14:19.906      slat (usec): min=2, max=20892, avg=113.19, stdev=744.92
00:14:19.906      clat (usec): min=906, max=39512, avg=15084.66, stdev=4389.04
00:14:19.906       lat (usec): min=5517, max=39541, avg=15197.84, stdev=4447.38
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 6063],  5.00th=[10945], 10.00th=[11338], 20.00th=[11731],
00:14:19.906       | 30.00th=[12125], 40.00th=[12911], 50.00th=[13435], 60.00th=[14615],
00:14:19.906       | 70.00th=[17171], 80.00th=[18744], 90.00th=[21365], 95.00th=[22938],
00:14:19.906       | 99.00th=[31589], 99.50th=[32375], 99.90th=[32375], 99.95th=[32375],
00:14:19.906       | 99.99th=[39584]
00:14:19.906     bw (  KiB/s): min=17128, max=17960, per=23.70%, avg=17544.00, stdev=588.31, samples=2
00:14:19.906     iops        : min= 4282, max= 4490, avg=4386.00, stdev=147.08, samples=2
00:14:19.906    lat (usec)   : 1000=0.01%
00:14:19.906    lat (msec)   : 10=2.86%, 20=85.24%, 50=11.89%
00:14:19.906    cpu          : usr=4.58%, sys=5.68%, ctx=360, majf=0, minf=1
00:14:19.906    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:14:19.906       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:19.906       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:19.906       issued rwts: total=4096,4514,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:19.906       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:19.906  job3: (groupid=0, jobs=1): err= 0: pid=2981807: Mon Dec  9 23:55:35 2024
00:14:19.906    read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec)
00:14:19.906      slat (nsec): min=1456, max=14836k, avg=96670.73, stdev=619466.58
00:14:19.906      clat (usec): min=4856, max=28531, avg=12585.01, stdev=3094.32
00:14:19.906       lat (usec): min=5070, max=28538, avg=12681.68, stdev=3122.90
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 8160],  5.00th=[ 9634], 10.00th=[10159], 20.00th=[10683],
00:14:19.906       | 30.00th=[11076], 40.00th=[11469], 50.00th=[11600], 60.00th=[11863],
00:14:19.906       | 70.00th=[12256], 80.00th=[13566], 90.00th=[18482], 95.00th=[18744],
00:14:19.906       | 99.00th=[22938], 99.50th=[23462], 99.90th=[26346], 99.95th=[28443],
00:14:19.906       | 99.99th=[28443]
00:14:19.906    write: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(21.6MiB/1003msec); 0 zone resets
00:14:19.906      slat (usec): min=2, max=19916, avg=78.78, stdev=508.17
00:14:19.906      clat (usec): min=649, max=36794, avg=11354.86, stdev=3665.52
00:14:19.906       lat (usec): min=658, max=36818, avg=11433.64, stdev=3696.36
00:14:19.906      clat percentiles (usec):
00:14:19.906       |  1.00th=[ 2474],  5.00th=[ 5669], 10.00th=[ 7111], 20.00th=[10159],
00:14:19.906       | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338],
00:14:19.906       | 70.00th=[11469], 80.00th=[11863], 90.00th=[13829], 95.00th=[18744],
00:14:19.906       | 99.00th=[25560], 99.50th=[27132], 99.90th=[29230], 99.95th=[29230],
00:14:19.906       | 99.99th=[36963]
00:14:19.906     bw (  KiB/s): min=20528, max=22624, per=29.14%, avg=21576.00, stdev=1482.10, samples=2
00:14:19.906     iops        : min= 5132, max= 5656, avg=5394.00, stdev=370.52, samples=2
00:14:19.906    lat (usec)   : 750=0.04%, 1000=0.08%
00:14:19.906    lat (msec)   : 2=0.10%, 4=0.78%, 10=12.71%, 20=82.08%, 50=4.20%
00:14:19.906    cpu          : usr=4.99%, sys=7.19%, ctx=438, majf=0, minf=1
00:14:19.906    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:14:19.906       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:19.906       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:19.906       issued rwts: total=5120,5521,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:19.906       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:19.906  
00:14:19.906  Run status group 0 (all jobs):
00:14:19.906     READ: bw=65.9MiB/s (69.1MB/s), 8143KiB/s-22.3MiB/s (8339kB/s-23.3MB/s), io=66.3MiB (69.5MB), run=1002-1006msec
00:14:19.906    WRITE: bw=72.3MiB/s (75.8MB/s), 9710KiB/s-24.0MiB/s (9943kB/s-25.1MB/s), io=72.7MiB (76.3MB), run=1002-1006msec
00:14:19.906  
00:14:19.906  Disk stats (read/write):
00:14:19.907    nvme0n1: ios=4813/5120, merge=0/0, ticks=33807/31973, in_queue=65780, util=97.09%
00:14:19.907    nvme0n2: ios=2040/2048, merge=0/0, ticks=20726/19197, in_queue=39923, util=86.79%
00:14:19.907    nvme0n3: ios=3531/3584, merge=0/0, ticks=24287/28513, in_queue=52800, util=88.02%
00:14:19.907    nvme0n4: ios=4403/4608, merge=0/0, ticks=31265/31477, in_queue=62742, util=98.01%
00:14:19.907   23:55:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:14:19.907   23:55:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2982031
00:14:19.907   23:55:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:14:19.907   23:55:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:14:19.907  [global]
00:14:19.907  thread=1
00:14:19.907  invalidate=1
00:14:19.907  rw=read
00:14:19.907  time_based=1
00:14:19.907  runtime=10
00:14:19.907  ioengine=libaio
00:14:19.907  direct=1
00:14:19.907  bs=4096
00:14:19.907  iodepth=1
00:14:19.907  norandommap=1
00:14:19.907  numjobs=1
00:14:19.907  
00:14:19.907  [job0]
00:14:19.907  filename=/dev/nvme0n1
00:14:19.907  [job1]
00:14:19.907  filename=/dev/nvme0n2
00:14:19.907  [job2]
00:14:19.907  filename=/dev/nvme0n3
00:14:19.907  [job3]
00:14:19.907  filename=/dev/nvme0n4
00:14:19.907  Could not set queue depth (nvme0n1)
00:14:19.907  Could not set queue depth (nvme0n2)
00:14:19.907  Could not set queue depth (nvme0n3)
00:14:19.907  Could not set queue depth (nvme0n4)
00:14:20.165  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:20.165  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:20.165  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:20.165  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:14:20.165  fio-3.35
00:14:20.165  Starting 4 threads
00:14:23.444   23:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:14:23.444  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=42475520, buflen=4096
00:14:23.444  fio: pid=2982253, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:14:23.444   23:55:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:14:23.444   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:23.444   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:14:23.444  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=23531520, buflen=4096
00:14:23.444  fio: pid=2982248, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:14:23.702  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=33673216, buflen=4096
00:14:23.702  fio: pid=2982213, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:14:23.702   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:23.702   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:14:23.702  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45670400, buflen=4096
00:14:23.702  fio: pid=2982229, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:14:23.702   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:23.702   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:14:23.960  
00:14:23.960  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2982213: Mon Dec  9 23:55:39 2024
00:14:23.960    read: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(32.1MiB/3249msec)
00:14:23.960      slat (usec): min=6, max=16636, avg=12.00, stdev=252.34
00:14:23.960      clat (usec): min=153, max=41345, avg=378.72, stdev=2656.02
00:14:23.960       lat (usec): min=160, max=41357, avg=390.72, stdev=2668.28
00:14:23.960      clat percentiles (usec):
00:14:23.960       |  1.00th=[  172],  5.00th=[  180], 10.00th=[  186], 20.00th=[  192],
00:14:23.960       | 30.00th=[  196], 40.00th=[  198], 50.00th=[  202], 60.00th=[  206],
00:14:23.960       | 70.00th=[  210], 80.00th=[  217], 90.00th=[  225], 95.00th=[  237],
00:14:23.960       | 99.00th=[  289], 99.50th=[  379], 99.90th=[41157], 99.95th=[41157],
00:14:23.960       | 99.99th=[41157]
00:14:23.960     bw (  KiB/s): min=   96, max=19064, per=23.55%, avg=9626.00, stdev=8549.31, samples=6
00:14:23.960     iops        : min=   24, max= 4766, avg=2406.50, stdev=2137.33, samples=6
00:14:23.960    lat (usec)   : 250=96.41%, 500=3.13%, 750=0.01%
00:14:23.960    lat (msec)   : 4=0.01%, 50=0.43%
00:14:23.960    cpu          : usr=1.63%, sys=3.76%, ctx=8224, majf=0, minf=1
00:14:23.960    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:23.960       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.960       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.960       issued rwts: total=8222,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:23.960       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:23.960  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2982229: Mon Dec  9 23:55:39 2024
00:14:23.960    read: IOPS=3211, BW=12.5MiB/s (13.2MB/s)(43.6MiB/3472msec)
00:14:23.960      slat (usec): min=2, max=17138, avg=16.02, stdev=336.82
00:14:23.960      clat (usec): min=146, max=41263, avg=291.59, stdev=1871.40
00:14:23.960       lat (usec): min=159, max=50801, avg=307.61, stdev=1920.21
00:14:23.960      clat percentiles (usec):
00:14:23.960       |  1.00th=[  172],  5.00th=[  182], 10.00th=[  188], 20.00th=[  192],
00:14:23.960       | 30.00th=[  196], 40.00th=[  200], 50.00th=[  204], 60.00th=[  208],
00:14:23.960       | 70.00th=[  212], 80.00th=[  217], 90.00th=[  225], 95.00th=[  233],
00:14:23.960       | 99.00th=[  269], 99.50th=[  285], 99.90th=[41157], 99.95th=[41157],
00:14:23.960       | 99.99th=[41157]
00:14:23.960     bw (  KiB/s): min=   96, max=18896, per=31.64%, avg=12937.33, stdev=8459.34, samples=6
00:14:23.960     iops        : min=   24, max= 4724, avg=3234.33, stdev=2114.83, samples=6
00:14:23.960    lat (usec)   : 250=97.96%, 500=1.81%
00:14:23.960    lat (msec)   : 50=0.22%
00:14:23.960    cpu          : usr=2.16%, sys=4.78%, ctx=11160, majf=0, minf=2
00:14:23.960    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:23.960       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.960       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.960       issued rwts: total=11151,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:23.960       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:23.960  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2982248: Mon Dec  9 23:55:39 2024
00:14:23.960    read: IOPS=1889, BW=7557KiB/s (7738kB/s)(22.4MiB/3041msec)
00:14:23.960      slat (usec): min=6, max=17323, avg=10.62, stdev=228.44
00:14:23.960      clat (usec): min=188, max=42082, avg=513.78, stdev=3299.61
00:14:23.960       lat (usec): min=195, max=58597, avg=524.40, stdev=3344.81
00:14:23.960      clat percentiles (usec):
00:14:23.960       |  1.00th=[  198],  5.00th=[  206], 10.00th=[  210], 20.00th=[  219],
00:14:23.960       | 30.00th=[  225], 40.00th=[  231], 50.00th=[  237], 60.00th=[  245],
00:14:23.960       | 70.00th=[  253], 80.00th=[  265], 90.00th=[  285], 95.00th=[  297],
00:14:23.960       | 99.00th=[  445], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157],
00:14:23.960       | 99.99th=[42206]
00:14:23.960     bw (  KiB/s): min=  104, max=17152, per=22.40%, avg=9156.80, stdev=6704.03, samples=5
00:14:23.960     iops        : min=   26, max= 4288, avg=2289.20, stdev=1676.01, samples=5
00:14:23.960    lat (usec)   : 250=66.76%, 500=32.51%, 750=0.05%
00:14:23.960    lat (msec)   : 50=0.66%
00:14:23.961    cpu          : usr=0.43%, sys=1.81%, ctx=5750, majf=0, minf=2
00:14:23.961    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:23.961       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.961       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.961       issued rwts: total=5746,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:23.961       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:23.961  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2982253: Mon Dec  9 23:55:39 2024
00:14:23.961    read: IOPS=3702, BW=14.5MiB/s (15.2MB/s)(40.5MiB/2801msec)
00:14:23.961      slat (nsec): min=6690, max=56932, avg=7558.15, stdev=1181.08
00:14:23.961      clat (usec): min=165, max=41353, avg=260.54, stdev=986.06
00:14:23.961       lat (usec): min=187, max=41360, avg=268.10, stdev=986.08
00:14:23.961      clat percentiles (usec):
00:14:23.961       |  1.00th=[  194],  5.00th=[  200], 10.00th=[  204], 20.00th=[  210],
00:14:23.961       | 30.00th=[  215], 40.00th=[  219], 50.00th=[  223], 60.00th=[  227],
00:14:23.961       | 70.00th=[  235], 80.00th=[  255], 90.00th=[  302], 95.00th=[  322],
00:14:23.961       | 99.00th=[  388], 99.50th=[  404], 99.90th=[  429], 99.95th=[40633],
00:14:23.961       | 99.99th=[41157]
00:14:23.961     bw (  KiB/s): min= 6192, max=17648, per=35.88%, avg=14670.40, stdev=4920.15, samples=5
00:14:23.961     iops        : min= 1548, max= 4412, avg=3667.60, stdev=1230.04, samples=5
00:14:23.961    lat (usec)   : 250=79.00%, 500=20.91%, 750=0.01%
00:14:23.961    lat (msec)   : 20=0.01%, 50=0.06%
00:14:23.961    cpu          : usr=0.86%, sys=3.54%, ctx=10374, majf=0, minf=2
00:14:23.961    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:14:23.961       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.961       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:23.961       issued rwts: total=10371,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:23.961       latency   : target=0, window=0, percentile=100.00%, depth=1
00:14:23.961  
00:14:23.961  Run status group 0 (all jobs):
00:14:23.961     READ: bw=39.9MiB/s (41.9MB/s), 7557KiB/s-14.5MiB/s (7738kB/s-15.2MB/s), io=139MiB (145MB), run=2801-3472msec
00:14:23.961  
00:14:23.961  Disk stats (read/write):
00:14:23.961    nvme0n1: ios=7656/0, merge=0/0, ticks=2895/0, in_queue=2895, util=94.73%
00:14:23.961    nvme0n2: ios=11175/0, merge=0/0, ticks=3461/0, in_queue=3461, util=97.10%
00:14:23.961    nvme0n3: ios=5739/0, merge=0/0, ticks=2773/0, in_queue=2773, util=96.49%
00:14:23.961    nvme0n4: ios=9627/0, merge=0/0, ticks=2467/0, in_queue=2467, util=96.41%
00:14:23.961   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:23.961   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:14:24.218   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:24.218   23:55:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:14:24.476   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:24.476   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2982031
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:14:24.734   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:24.992  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:14:24.992  nvmf hotplug test: fio failed as expected
00:14:24.992   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:25.250  rmmod nvme_tcp
00:14:25.250  rmmod nvme_fabrics
00:14:25.250  rmmod nvme_keyring
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:14:25.250   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:14:25.251   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2979159 ']'
00:14:25.251   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2979159
00:14:25.251   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2979159 ']'
00:14:25.251   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2979159
00:14:25.251    23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:14:25.251   23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:25.251    23:55:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2979159
00:14:25.251   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:25.251   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:25.251   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2979159'
00:14:25.251  killing process with pid 2979159
00:14:25.251   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2979159
00:14:25.251   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2979159
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:25.510   23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:25.510    23:55:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:27.417   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:27.417  
00:14:27.417  real	0m27.548s
00:14:27.417  user	1m50.542s
00:14:27.417  sys	0m8.814s
00:14:27.417   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:27.417   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.417  ************************************
00:14:27.417  END TEST nvmf_fio_target
00:14:27.417  ************************************
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:14:27.676  ************************************
00:14:27.676  START TEST nvmf_bdevio
00:14:27.676  ************************************
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:14:27.676  * Looking for test storage...
00:14:27.676  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:27.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.676  		--rc genhtml_branch_coverage=1
00:14:27.676  		--rc genhtml_function_coverage=1
00:14:27.676  		--rc genhtml_legend=1
00:14:27.676  		--rc geninfo_all_blocks=1
00:14:27.676  		--rc geninfo_unexecuted_blocks=1
00:14:27.676  		
00:14:27.676  		'
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:27.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.676  		--rc genhtml_branch_coverage=1
00:14:27.676  		--rc genhtml_function_coverage=1
00:14:27.676  		--rc genhtml_legend=1
00:14:27.676  		--rc geninfo_all_blocks=1
00:14:27.676  		--rc geninfo_unexecuted_blocks=1
00:14:27.676  		
00:14:27.676  		'
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:27.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.676  		--rc genhtml_branch_coverage=1
00:14:27.676  		--rc genhtml_function_coverage=1
00:14:27.676  		--rc genhtml_legend=1
00:14:27.676  		--rc geninfo_all_blocks=1
00:14:27.676  		--rc geninfo_unexecuted_blocks=1
00:14:27.676  		
00:14:27.676  		'
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:27.676  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:27.676  		--rc genhtml_branch_coverage=1
00:14:27.676  		--rc genhtml_function_coverage=1
00:14:27.676  		--rc genhtml_legend=1
00:14:27.676  		--rc geninfo_all_blocks=1
00:14:27.676  		--rc geninfo_unexecuted_blocks=1
00:14:27.676  		
00:14:27.676  		'
00:14:27.676   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:27.676     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:14:27.676    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:27.677    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:27.677     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:28.086     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:14:28.086     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:28.086     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:28.086     23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:28.086      23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:28.086      23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:28.086      23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:28.086      23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:14:28.086      23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:28.086  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:28.086    23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:14:28.086   23:55:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:33.379  Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:33.379  Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:33.379  Found net devices under 0000:af:00.0: cvl_0_0
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:33.379   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:14:33.380  Found net devices under 0000:af:00.1: cvl_0_1
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:33.380   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:33.639  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:33.639  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms
00:14:33.639  
00:14:33.639  --- 10.0.0.2 ping statistics ---
00:14:33.639  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:33.639  rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:33.639  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:33.639  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:14:33.639  
00:14:33.639  --- 10.0.0.1 ping statistics ---
00:14:33.639  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:33.639  rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2986561
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2986561
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2986561 ']'
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:33.639  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:33.639   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:33.898  [2024-12-09 23:55:49.517808] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:14:33.898  [2024-12-09 23:55:49.517852] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:33.898  [2024-12-09 23:55:49.595574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:33.898  [2024-12-09 23:55:49.636688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:33.898  [2024-12-09 23:55:49.636726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:33.898  [2024-12-09 23:55:49.636733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:33.898  [2024-12-09 23:55:49.636741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:33.898  [2024-12-09 23:55:49.636746] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:33.898  [2024-12-09 23:55:49.638124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:14:33.898  [2024-12-09 23:55:49.638237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:14:33.898  [2024-12-09 23:55:49.638346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:33.898  [2024-12-09 23:55:49.638346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:14:33.898   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:33.898   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:14:33.898   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:14:33.898   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:33.898   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157  [2024-12-09 23:55:49.783579] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157  Malloc0
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:34.157  [2024-12-09 23:55:49.837230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:34.157   23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:14:34.157  {
00:14:34.157    "params": {
00:14:34.157      "name": "Nvme$subsystem",
00:14:34.157      "trtype": "$TEST_TRANSPORT",
00:14:34.157      "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:34.157      "adrfam": "ipv4",
00:14:34.157      "trsvcid": "$NVMF_PORT",
00:14:34.157      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:34.157      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:34.157      "hdgst": ${hdgst:-false},
00:14:34.157      "ddgst": ${ddgst:-false}
00:14:34.157    },
00:14:34.157    "method": "bdev_nvme_attach_controller"
00:14:34.157  }
00:14:34.157  EOF
00:14:34.157  )")
00:14:34.157     23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:14:34.157    23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:14:34.157     23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:14:34.157     23:55:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:14:34.157    "params": {
00:14:34.157      "name": "Nvme1",
00:14:34.157      "trtype": "tcp",
00:14:34.157      "traddr": "10.0.0.2",
00:14:34.157      "adrfam": "ipv4",
00:14:34.157      "trsvcid": "4420",
00:14:34.157      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:34.157      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:34.157      "hdgst": false,
00:14:34.157      "ddgst": false
00:14:34.157    },
00:14:34.157    "method": "bdev_nvme_attach_controller"
00:14:34.157  }'
00:14:34.157  [2024-12-09 23:55:49.888019] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:14:34.158  [2024-12-09 23:55:49.888064] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2986721 ]
00:14:34.158  [2024-12-09 23:55:49.965116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:34.158  [2024-12-09 23:55:50.007746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:34.158  [2024-12-09 23:55:50.007855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:34.158  [2024-12-09 23:55:50.007856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:34.722  I/O targets:
00:14:34.722    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:14:34.722  
00:14:34.722  
00:14:34.722       CUnit - A unit testing framework for C - Version 2.1-3
00:14:34.722       http://cunit.sourceforge.net/
00:14:34.722  
00:14:34.722  
00:14:34.722  Suite: bdevio tests on: Nvme1n1
00:14:34.722    Test: blockdev write read block ...passed
00:14:34.722    Test: blockdev write zeroes read block ...passed
00:14:34.722    Test: blockdev write zeroes read no split ...passed
00:14:34.722    Test: blockdev write zeroes read split ...passed
00:14:34.722    Test: blockdev write zeroes read split partial ...passed
00:14:34.722    Test: blockdev reset ...[2024-12-09 23:55:50.449949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:14:34.722  [2024-12-09 23:55:50.450008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1406610 (9): Bad file descriptor
00:14:34.722  [2024-12-09 23:55:50.469151] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:14:34.722  passed
00:14:34.722    Test: blockdev write read 8 blocks ...passed
00:14:34.722    Test: blockdev write read size > 128k ...passed
00:14:34.722    Test: blockdev write read invalid size ...passed
00:14:34.722    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:34.722    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:34.722    Test: blockdev write read max offset ...passed
00:14:34.980    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:34.980    Test: blockdev writev readv 8 blocks ...passed
00:14:34.980    Test: blockdev writev readv 30 x 1block ...passed
00:14:34.980    Test: blockdev writev readv block ...passed
00:14:34.980    Test: blockdev writev readv size > 128k ...passed
00:14:34.980    Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:34.980    Test: blockdev comparev and writev ...[2024-12-09 23:55:50.723931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.723959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.723973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.723981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.724791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:14:34.980  [2024-12-09 23:55:50.724798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:14:34.980  passed
00:14:34.980    Test: blockdev nvme passthru rw ...passed
00:14:34.980    Test: blockdev nvme passthru vendor specific ...[2024-12-09 23:55:50.806479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:34.980  [2024-12-09 23:55:50.806493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.806595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:34.980  [2024-12-09 23:55:50.806604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.806707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:34.980  [2024-12-09 23:55:50.806716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:14:34.980  [2024-12-09 23:55:50.806821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:14:34.980  [2024-12-09 23:55:50.806830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:14:34.980  passed
00:14:34.980    Test: blockdev nvme admin passthru ...passed
00:14:35.239    Test: blockdev copy ...passed
00:14:35.239  
00:14:35.239  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:35.239                suites      1      1    n/a      0        0
00:14:35.239                 tests     23     23     23      0        0
00:14:35.239               asserts    152    152    152      0      n/a
00:14:35.239  
00:14:35.239  Elapsed time =    1.069 seconds
00:14:35.239   23:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:35.239   23:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:35.239   23:55:50 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:35.239  rmmod nvme_tcp
00:14:35.239  rmmod nvme_fabrics
00:14:35.239  rmmod nvme_keyring
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2986561 ']'
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2986561
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2986561 ']'
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2986561
00:14:35.239    23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:14:35.239   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:35.239    23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2986561
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2986561'
00:14:35.498  killing process with pid 2986561
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2986561
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2986561
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:35.498   23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:35.498    23:55:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:38.036  
00:14:38.036  real	0m10.033s
00:14:38.036  user	0m10.688s
00:14:38.036  sys	0m5.012s
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:14:38.036  ************************************
00:14:38.036  END TEST nvmf_bdevio
00:14:38.036  ************************************
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:14:38.036  
00:14:38.036  real	4m36.471s
00:14:38.036  user	10m32.744s
00:14:38.036  sys	1m39.411s
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:14:38.036  ************************************
00:14:38.036  END TEST nvmf_target_core
00:14:38.036  ************************************
00:14:38.036   23:55:53 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:14:38.036   23:55:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:38.036   23:55:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:38.036   23:55:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:38.036  ************************************
00:14:38.036  START TEST nvmf_target_extra
00:14:38.036  ************************************
00:14:38.036   23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp
00:14:38.036  * Looking for test storage...
00:14:38.036  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-:
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-:
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<'
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:38.036     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2
00:14:38.036    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:38.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.037  		--rc genhtml_branch_coverage=1
00:14:38.037  		--rc genhtml_function_coverage=1
00:14:38.037  		--rc genhtml_legend=1
00:14:38.037  		--rc geninfo_all_blocks=1
00:14:38.037  		--rc geninfo_unexecuted_blocks=1
00:14:38.037  		
00:14:38.037  		'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:38.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.037  		--rc genhtml_branch_coverage=1
00:14:38.037  		--rc genhtml_function_coverage=1
00:14:38.037  		--rc genhtml_legend=1
00:14:38.037  		--rc geninfo_all_blocks=1
00:14:38.037  		--rc geninfo_unexecuted_blocks=1
00:14:38.037  		
00:14:38.037  		'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:38.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.037  		--rc genhtml_branch_coverage=1
00:14:38.037  		--rc genhtml_function_coverage=1
00:14:38.037  		--rc genhtml_legend=1
00:14:38.037  		--rc geninfo_all_blocks=1
00:14:38.037  		--rc geninfo_unexecuted_blocks=1
00:14:38.037  		
00:14:38.037  		'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:38.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.037  		--rc genhtml_branch_coverage=1
00:14:38.037  		--rc genhtml_function_coverage=1
00:14:38.037  		--rc genhtml_legend=1
00:14:38.037  		--rc geninfo_all_blocks=1
00:14:38.037  		--rc geninfo_unexecuted_blocks=1
00:14:38.037  		
00:14:38.037  		'
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:38.037      23:55:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.037      23:55:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.037      23:55:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.037      23:55:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH
00:14:38.037      23:55:53 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:38.037  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@")
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]]
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:38.037  ************************************
00:14:38.037  START TEST nvmf_example
00:14:38.037  ************************************
00:14:38.037   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:14:38.037  * Looking for test storage...
00:14:38.037  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:38.037    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version
00:14:38.037     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-:
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-:
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:38.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.298  		--rc genhtml_branch_coverage=1
00:14:38.298  		--rc genhtml_function_coverage=1
00:14:38.298  		--rc genhtml_legend=1
00:14:38.298  		--rc geninfo_all_blocks=1
00:14:38.298  		--rc geninfo_unexecuted_blocks=1
00:14:38.298  		
00:14:38.298  		'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:38.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.298  		--rc genhtml_branch_coverage=1
00:14:38.298  		--rc genhtml_function_coverage=1
00:14:38.298  		--rc genhtml_legend=1
00:14:38.298  		--rc geninfo_all_blocks=1
00:14:38.298  		--rc geninfo_unexecuted_blocks=1
00:14:38.298  		
00:14:38.298  		'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:38.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.298  		--rc genhtml_branch_coverage=1
00:14:38.298  		--rc genhtml_function_coverage=1
00:14:38.298  		--rc genhtml_legend=1
00:14:38.298  		--rc geninfo_all_blocks=1
00:14:38.298  		--rc geninfo_unexecuted_blocks=1
00:14:38.298  		
00:14:38.298  		'
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:38.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:38.298  		--rc genhtml_branch_coverage=1
00:14:38.298  		--rc genhtml_function_coverage=1
00:14:38.298  		--rc genhtml_legend=1
00:14:38.298  		--rc geninfo_all_blocks=1
00:14:38.298  		--rc geninfo_unexecuted_blocks=1
00:14:38.298  		
00:14:38.298  		'
00:14:38.298   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:38.298     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:38.298    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:38.299     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:38.299     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob
00:14:38.299     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:38.299     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:38.299     23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:38.299      23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.299      23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.299      23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.299      23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH
00:14:38.299      23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:38.299  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:38.299    23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable
00:14:38.299   23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=()
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:14:44.874  Found 0000:af:00.0 (0x8086 - 0x159b)
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:14:44.874  Found 0000:af:00.1 (0x8086 - 0x159b)
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:44.874   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:14:44.875  Found net devices under 0000:af:00.0: cvl_0_0
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:14:44.875  Found net devices under 0000:af:00.1: cvl_0_1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:14:44.875  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:44.875  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms
00:14:44.875  
00:14:44.875  --- 10.0.0.2 ping statistics ---
00:14:44.875  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:44.875  rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:44.875  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:44.875  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms
00:14:44.875  
00:14:44.875  --- 10.0.0.1 ping statistics ---
00:14:44.875  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:44.875  rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2990551
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2990551
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2990551 ']'
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:44.875  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:44.875   23:55:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.133    23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:14:45.133    23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.133    23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133    23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:14:45.133   23:56:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:57.326  Initializing NVMe Controllers
00:14:57.326  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:57.326  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:57.326  Initialization complete. Launching workers.
00:14:57.326  ========================================================
00:14:57.326                                                                                                               Latency(us)
00:14:57.326  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:57.326  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   18383.18      71.81    3480.81     566.79   16111.64
00:14:57.326  ========================================================
00:14:57.326  Total                                                                    :   18383.18      71.81    3480.81     566.79   16111.64
00:14:57.326  
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:57.326  rmmod nvme_tcp
00:14:57.326  rmmod nvme_fabrics
00:14:57.326  rmmod nvme_keyring
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2990551 ']'
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2990551
00:14:57.326   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2990551 ']'
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2990551
00:14:57.327    23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:57.327    23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2990551
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']'
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2990551'
00:14:57.327  killing process with pid 2990551
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2990551
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2990551
00:14:57.327  nvmf threads initialize successfully
00:14:57.327  bdev subsystem init successfully
00:14:57.327  created a nvmf target service
00:14:57.327  create targets's poll groups done
00:14:57.327  all subsystems of target started
00:14:57.327  nvmf target is running
00:14:57.327  all subsystems of target stopped
00:14:57.327  destroy targets's poll groups done
00:14:57.327  destroyed the nvmf target service
00:14:57.327  bdev subsystem finish successfully
00:14:57.327  nvmf threads destroy successfully
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:57.327   23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:57.327    23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:57.895  
00:14:57.895  real	0m19.819s
00:14:57.895  user	0m46.139s
00:14:57.895  sys	0m6.136s
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:14:57.895  ************************************
00:14:57.895  END TEST nvmf_example
00:14:57.895  ************************************
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:57.895  ************************************
00:14:57.895  START TEST nvmf_filesystem
00:14:57.895  ************************************
00:14:57.895   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:14:57.895  * Looking for test storage...
00:14:57.895  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:57.895     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:57.895      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:14:57.895      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:14:58.157     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:58.158      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:58.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.158  		--rc genhtml_branch_coverage=1
00:14:58.158  		--rc genhtml_function_coverage=1
00:14:58.158  		--rc genhtml_legend=1
00:14:58.158  		--rc geninfo_all_blocks=1
00:14:58.158  		--rc geninfo_unexecuted_blocks=1
00:14:58.158  		
00:14:58.158  		'
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:58.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.158  		--rc genhtml_branch_coverage=1
00:14:58.158  		--rc genhtml_function_coverage=1
00:14:58.158  		--rc genhtml_legend=1
00:14:58.158  		--rc geninfo_all_blocks=1
00:14:58.158  		--rc geninfo_unexecuted_blocks=1
00:14:58.158  		
00:14:58.158  		'
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:58.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.158  		--rc genhtml_branch_coverage=1
00:14:58.158  		--rc genhtml_function_coverage=1
00:14:58.158  		--rc genhtml_legend=1
00:14:58.158  		--rc geninfo_all_blocks=1
00:14:58.158  		--rc geninfo_unexecuted_blocks=1
00:14:58.158  		
00:14:58.158  		'
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:58.158  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.158  		--rc genhtml_branch_coverage=1
00:14:58.158  		--rc genhtml_function_coverage=1
00:14:58.158  		--rc genhtml_legend=1
00:14:58.158  		--rc geninfo_all_blocks=1
00:14:58.158  		--rc geninfo_unexecuted_blocks=1
00:14:58.158  		
00:14:58.158  		'
00:14:58.158   23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']'
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:14:58.158    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:14:58.158     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n
00:14:58.159    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:14:58.159       23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:14:58.159      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:14:58.159     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:14:58.159  #define SPDK_CONFIG_H
00:14:58.159  #define SPDK_CONFIG_AIO_FSDEV 1
00:14:58.159  #define SPDK_CONFIG_APPS 1
00:14:58.159  #define SPDK_CONFIG_ARCH native
00:14:58.159  #undef SPDK_CONFIG_ASAN
00:14:58.159  #undef SPDK_CONFIG_AVAHI
00:14:58.159  #undef SPDK_CONFIG_CET
00:14:58.159  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:14:58.159  #define SPDK_CONFIG_COVERAGE 1
00:14:58.159  #define SPDK_CONFIG_CROSS_PREFIX 
00:14:58.159  #undef SPDK_CONFIG_CRYPTO
00:14:58.159  #undef SPDK_CONFIG_CRYPTO_MLX5
00:14:58.159  #undef SPDK_CONFIG_CUSTOMOCF
00:14:58.159  #undef SPDK_CONFIG_DAOS
00:14:58.159  #define SPDK_CONFIG_DAOS_DIR 
00:14:58.159  #define SPDK_CONFIG_DEBUG 1
00:14:58.159  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:14:58.159  #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:14:58.159  #define SPDK_CONFIG_DPDK_INC_DIR 
00:14:58.159  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:14:58.159  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:14:58.159  #undef SPDK_CONFIG_DPDK_UADK
00:14:58.159  #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:14:58.159  #define SPDK_CONFIG_EXAMPLES 1
00:14:58.159  #undef SPDK_CONFIG_FC
00:14:58.159  #define SPDK_CONFIG_FC_PATH 
00:14:58.159  #define SPDK_CONFIG_FIO_PLUGIN 1
00:14:58.159  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:14:58.159  #define SPDK_CONFIG_FSDEV 1
00:14:58.159  #undef SPDK_CONFIG_FUSE
00:14:58.159  #undef SPDK_CONFIG_FUZZER
00:14:58.159  #define SPDK_CONFIG_FUZZER_LIB 
00:14:58.159  #undef SPDK_CONFIG_GOLANG
00:14:58.159  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:14:58.159  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:14:58.159  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:14:58.159  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:14:58.159  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:14:58.159  #undef SPDK_CONFIG_HAVE_LIBBSD
00:14:58.159  #undef SPDK_CONFIG_HAVE_LZ4
00:14:58.159  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:14:58.159  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:14:58.159  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:14:58.159  #define SPDK_CONFIG_IDXD 1
00:14:58.159  #define SPDK_CONFIG_IDXD_KERNEL 1
00:14:58.159  #undef SPDK_CONFIG_IPSEC_MB
00:14:58.159  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:14:58.159  #define SPDK_CONFIG_ISAL 1
00:14:58.159  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:14:58.159  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:14:58.159  #define SPDK_CONFIG_LIBDIR 
00:14:58.159  #undef SPDK_CONFIG_LTO
00:14:58.159  #define SPDK_CONFIG_MAX_LCORES 128
00:14:58.159  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:14:58.159  #define SPDK_CONFIG_NVME_CUSE 1
00:14:58.159  #undef SPDK_CONFIG_OCF
00:14:58.159  #define SPDK_CONFIG_OCF_PATH 
00:14:58.159  #define SPDK_CONFIG_OPENSSL_PATH 
00:14:58.160  #undef SPDK_CONFIG_PGO_CAPTURE
00:14:58.160  #define SPDK_CONFIG_PGO_DIR 
00:14:58.160  #undef SPDK_CONFIG_PGO_USE
00:14:58.160  #define SPDK_CONFIG_PREFIX /usr/local
00:14:58.160  #undef SPDK_CONFIG_RAID5F
00:14:58.160  #undef SPDK_CONFIG_RBD
00:14:58.160  #define SPDK_CONFIG_RDMA 1
00:14:58.160  #define SPDK_CONFIG_RDMA_PROV verbs
00:14:58.160  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:14:58.160  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:14:58.160  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:14:58.160  #define SPDK_CONFIG_SHARED 1
00:14:58.160  #undef SPDK_CONFIG_SMA
00:14:58.160  #define SPDK_CONFIG_TESTS 1
00:14:58.160  #undef SPDK_CONFIG_TSAN
00:14:58.160  #define SPDK_CONFIG_UBLK 1
00:14:58.160  #define SPDK_CONFIG_UBSAN 1
00:14:58.160  #undef SPDK_CONFIG_UNIT_TESTS
00:14:58.160  #undef SPDK_CONFIG_URING
00:14:58.160  #define SPDK_CONFIG_URING_PATH 
00:14:58.160  #undef SPDK_CONFIG_URING_ZNS
00:14:58.160  #undef SPDK_CONFIG_USDT
00:14:58.160  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:14:58.160  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:14:58.160  #define SPDK_CONFIG_VFIO_USER 1
00:14:58.160  #define SPDK_CONFIG_VFIO_USER_DIR 
00:14:58.160  #define SPDK_CONFIG_VHOST 1
00:14:58.160  #define SPDK_CONFIG_VIRTIO 1
00:14:58.160  #undef SPDK_CONFIG_VTUNE
00:14:58.160  #define SPDK_CONFIG_VTUNE_DIR 
00:14:58.160  #define SPDK_CONFIG_WERROR 1
00:14:58.160  #define SPDK_CONFIG_WPDK_DIR 
00:14:58.160  #undef SPDK_CONFIG_XNVME
00:14:58.160  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:14:58.160       23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:14:58.160      23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]=
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E'
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]]
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp)
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm)
00:14:58.160     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]]
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # :
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:14:58.160    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # :
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # :
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:14:58.161    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:14:58.162    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:14:58.163     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2992890 ]]
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2992890
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:14:58.163     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.9wgYXh
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.9wgYXh/tests/target /tmp/spdk.9wgYXh
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:14:58.163     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=89315254272
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100837203968
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11521949696
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=50407235584
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20144435200
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20167442432
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23007232
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=49344450560
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=50418601984
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074151424
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:14:58.163    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=10083704832
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=10083717120
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:14:58.164  * Looking for test storage...
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:14:58.164     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:58.164     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=89315254272
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=13736542208
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:58.164  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:14:58.164    23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:14:58.164     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version
00:14:58.164     23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:14:58.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.424  		--rc genhtml_branch_coverage=1
00:14:58.424  		--rc genhtml_function_coverage=1
00:14:58.424  		--rc genhtml_legend=1
00:14:58.424  		--rc geninfo_all_blocks=1
00:14:58.424  		--rc geninfo_unexecuted_blocks=1
00:14:58.424  		
00:14:58.424  		'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:14:58.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.424  		--rc genhtml_branch_coverage=1
00:14:58.424  		--rc genhtml_function_coverage=1
00:14:58.424  		--rc genhtml_legend=1
00:14:58.424  		--rc geninfo_all_blocks=1
00:14:58.424  		--rc geninfo_unexecuted_blocks=1
00:14:58.424  		
00:14:58.424  		'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:14:58.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.424  		--rc genhtml_branch_coverage=1
00:14:58.424  		--rc genhtml_function_coverage=1
00:14:58.424  		--rc genhtml_legend=1
00:14:58.424  		--rc geninfo_all_blocks=1
00:14:58.424  		--rc geninfo_unexecuted_blocks=1
00:14:58.424  		
00:14:58.424  		'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:14:58.424  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:58.424  		--rc genhtml_branch_coverage=1
00:14:58.424  		--rc genhtml_function_coverage=1
00:14:58.424  		--rc genhtml_legend=1
00:14:58.424  		--rc geninfo_all_blocks=1
00:14:58.424  		--rc geninfo_unexecuted_blocks=1
00:14:58.424  		
00:14:58.424  		'
00:14:58.424   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:58.424    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:58.424     23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:58.424      23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.425      23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.425      23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.425      23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH
00:14:58.425      23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:58.425  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:14:58.425    23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable
00:14:58.425   23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=()
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:15:04.997  Found 0000:af:00.0 (0x8086 - 0x159b)
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:04.997   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:15:04.997  Found 0000:af:00.1 (0x8086 - 0x159b)
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:15:04.998  Found net devices under 0000:af:00.0: cvl_0_0
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:15:04.998  Found net devices under 0000:af:00.1: cvl_0_1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:04.998  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:04.998  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms
00:15:04.998  
00:15:04.998  --- 10.0.0.2 ping statistics ---
00:15:04.998  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.998  rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms
00:15:04.998   23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:04.998  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:04.998  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms
00:15:04.998  
00:15:04.998  --- 10.0.0.1 ping statistics ---
00:15:04.998  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:04.998  rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:15:04.998  ************************************
00:15:04.998  START TEST nvmf_filesystem_no_in_capsule
00:15:04.998  ************************************
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2996094
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2996094
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:04.998   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2996094 ']'
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:04.999  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999  [2024-12-09 23:56:20.138115] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:15:04.999  [2024-12-09 23:56:20.138156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:04.999  [2024-12-09 23:56:20.216564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:04.999  [2024-12-09 23:56:20.257267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:04.999  [2024-12-09 23:56:20.257303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:04.999  [2024-12-09 23:56:20.257310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:04.999  [2024-12-09 23:56:20.257316] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:04.999  [2024-12-09 23:56:20.257321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:04.999  [2024-12-09 23:56:20.258754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:04.999  [2024-12-09 23:56:20.258865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:04.999  [2024-12-09 23:56:20.258971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:04.999  [2024-12-09 23:56:20.258973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999  [2024-12-09 23:56:20.392426] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999  Malloc1
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999  [2024-12-09 23:56:20.552304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:04.999   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
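The target-side configuration traced above reduces to five RPC calls (`rpc_cmd` in the harness is a wrapper around SPDK's `scripts/rpc.py`). A non-runnable command sketch of the same sequence, assuming a `nvmf_tgt` already running on the default `/var/tmp/spdk.sock` and the same names and addresses as in this log:

```shell
# Sketch of the target setup from filesystem.sh@52-56 via scripts/rpc.py
# (requires a live nvmf_tgt; paths and NQN copied from the log above).
rpc=scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # TCP transport, in-capsule data size 0
$rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME                      # allow any host, set the serial
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```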
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:15:04.999  {
00:15:04.999  "name": "Malloc1",
00:15:04.999  "aliases": [
00:15:04.999  "5a622812-c867-4694-9ee5-7e5ca3123247"
00:15:04.999  ],
00:15:04.999  "product_name": "Malloc disk",
00:15:04.999  "block_size": 512,
00:15:04.999  "num_blocks": 1048576,
00:15:04.999  "uuid": "5a622812-c867-4694-9ee5-7e5ca3123247",
00:15:04.999  "assigned_rate_limits": {
00:15:04.999  "rw_ios_per_sec": 0,
00:15:04.999  "rw_mbytes_per_sec": 0,
00:15:04.999  "r_mbytes_per_sec": 0,
00:15:04.999  "w_mbytes_per_sec": 0
00:15:04.999  },
00:15:04.999  "claimed": true,
00:15:04.999  "claim_type": "exclusive_write",
00:15:04.999  "zoned": false,
00:15:04.999  "supported_io_types": {
00:15:04.999  "read": true,
00:15:04.999  "write": true,
00:15:04.999  "unmap": true,
00:15:04.999  "flush": true,
00:15:04.999  "reset": true,
00:15:04.999  "nvme_admin": false,
00:15:04.999  "nvme_io": false,
00:15:04.999  "nvme_io_md": false,
00:15:04.999  "write_zeroes": true,
00:15:04.999  "zcopy": true,
00:15:04.999  "get_zone_info": false,
00:15:04.999  "zone_management": false,
00:15:04.999  "zone_append": false,
00:15:04.999  "compare": false,
00:15:04.999  "compare_and_write": false,
00:15:04.999  "abort": true,
00:15:04.999  "seek_hole": false,
00:15:04.999  "seek_data": false,
00:15:04.999  "copy": true,
00:15:04.999  "nvme_iov_md": false
00:15:04.999  },
00:15:04.999  "memory_domains": [
00:15:04.999  {
00:15:04.999  "dma_device_id": "system",
00:15:04.999  "dma_device_type": 1
00:15:04.999  },
00:15:04.999  {
00:15:04.999  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:04.999  "dma_device_type": 2
00:15:04.999  }
00:15:04.999  ],
00:15:04.999  "driver_specific": {}
00:15:04.999  }
00:15:04.999  ]'
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:15:04.999    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:15:04.999     23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:15:05.000    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:15:05.000    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:15:05.000    23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:15:05.000   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
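The `malloc_size` derivation above is just block_size × num_blocks, with `get_bdev_size` rounding to MiB along the way. A runnable sketch of the same arithmetic, using the values reported by `bdev_get_bdevs`:

```shell
# Reproduce the get_bdev_size arithmetic from the trace:
# 512 B blocks * 1048576 blocks = 512 MiB = 536870912 bytes.
bs=512
nb=1048576
bdev_size_mib=$(( bs * nb / 1024 / 1024 ))
malloc_size=$(( bdev_size_mib * 1024 * 1024 ))
echo "$bdev_size_mib $malloc_size"
```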
00:15:05.000   23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:06.373   23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:15:06.373   23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:15:06.373   23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:06.373   23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:15:06.373   23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
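The `nvme_name` lookup above pairs `lsblk -o NAME,SERIAL` with a PCRE lookahead: `\w*` captures the device name and the `(?=\s+SERIAL)` lookahead anchors it to the serial column without consuming it. A self-contained check of the same grep against a canned `lsblk` line (assumes GNU grep built with `-P` support):

```shell
# The serial-to-device extraction from filesystem.sh@63, on sample output.
printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n' \
  | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'   # prints: nvme0n1
```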
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:15:08.270    23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:15:08.270   23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:15:08.528   23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:15:08.786   23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']'
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:10.159  ************************************
00:15:10.159  START TEST filesystem_ext4
00:15:10.159  ************************************
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:15:10.159   23:56:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:15:10.159  mke2fs 1.47.0 (5-Feb-2023)
00:15:10.159  Discarding device blocks:      0/522240             done                            
00:15:10.159  Creating filesystem with 522240 1k blocks and 130560 inodes
00:15:10.159  Filesystem UUID: 184b9647-c249-4346-8106-a0aabf8a495d
00:15:10.159  Superblock backups stored on blocks: 
00:15:10.159  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:15:10.159  
00:15:10.159  Allocating group tables:  0/64     done                            
00:15:10.159  Writing inode tables:  0/64     done                            
00:15:12.685  Creating journal (8192 blocks): done
00:15:14.316  Writing superblocks and filesystem accounting information:  0/64 8/64     done
00:15:14.316  
00:15:14.316   23:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0
00:15:14.316   23:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2996094
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:20.869  
00:15:20.869  real	0m10.574s
00:15:20.869  user	0m0.035s
00:15:20.869  sys	0m0.068s
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x
00:15:20.869  ************************************
00:15:20.869  END TEST filesystem_ext4
00:15:20.869  ************************************
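Each filesystem variant (ext4 above, btrfs and xfs below) runs the same smoke test after mkfs: mount, create a file, sync, delete it, sync, unmount. A minimal stand-alone sketch of that I/O cycle, with a scratch temp directory standing in for `/mnt/device` since no formatted NVMe-oF namespace is assumed:

```shell
# Mirror of the touch/sync/rm cycle from target/filesystem.sh@24-30,
# exercised against a temp directory instead of a mounted nvme0n1p1.
mnt=$(mktemp -d)
touch "$mnt/aaa"    # create a file on the (stand-in) filesystem
sync                # flush dirty data to the backing store
rm "$mnt/aaa"       # remove it again
sync                # flush the deletion too
rmdir "$mnt"        # stands in for the umount step
```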
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:20.869  ************************************
00:15:20.869  START TEST filesystem_btrfs
00:15:20.869  ************************************
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:15:20.869  btrfs-progs v6.8.1
00:15:20.869  See https://btrfs.readthedocs.io for more information.
00:15:20.869  
00:15:20.869  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:15:20.869  NOTE: several default settings have changed in version 5.15, please make sure
00:15:20.869        this does not affect your deployments:
00:15:20.869        - DUP for metadata (-m dup)
00:15:20.869        - enabled no-holes (-O no-holes)
00:15:20.869        - enabled free-space-tree (-R free-space-tree)
00:15:20.869  
00:15:20.869  Label:              (null)
00:15:20.869  UUID:               b710812c-565f-4822-8d2c-1f4c843d5e21
00:15:20.869  Node size:          16384
00:15:20.869  Sector size:        4096	(CPU page size: 4096)
00:15:20.869  Filesystem size:    510.00MiB
00:15:20.869  Block group profiles:
00:15:20.869    Data:             single            8.00MiB
00:15:20.869    Metadata:         DUP              32.00MiB
00:15:20.869    System:           DUP               8.00MiB
00:15:20.869  SSD detected:       yes
00:15:20.869  Zoned device:       no
00:15:20.869  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:15:20.869  Checksum:           crc32c
00:15:20.869  Number of devices:  1
00:15:20.869  Devices:
00:15:20.869     ID        SIZE  PATH          
00:15:20.869      1   510.00MiB  /dev/nvme0n1p1
00:15:20.869  
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2996094
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:20.869   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:20.870   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:20.870   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:20.870  
00:15:20.870  real	0m0.420s
00:15:20.870  user	0m0.026s
00:15:20.870  sys	0m0.114s
00:15:20.870   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:20.870   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x
00:15:20.870  ************************************
00:15:20.870  END TEST filesystem_btrfs
00:15:20.870  ************************************
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:21.128  ************************************
00:15:21.128  START TEST filesystem_xfs
00:15:21.128  ************************************
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f
00:15:21.128   23:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:15:21.128  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:15:21.128           =                       sectsz=512   attr=2, projid32bit=1
00:15:21.128           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:15:21.128           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:15:21.128  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:15:21.128           =                       sunit=0      swidth=0 blks
00:15:21.128  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:15:21.128  log      =internal log           bsize=4096   blocks=16384, version=2
00:15:21.128           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:15:21.128  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:15:22.061  Discarding blocks...Done.
00:15:22.061   23:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0
00:15:22.061   23:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2996094
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:23.960  
00:15:23.960  real	0m2.695s
00:15:23.960  user	0m0.029s
00:15:23.960  sys	0m0.067s
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x
00:15:23.960  ************************************
00:15:23.960  END TEST filesystem_xfs
00:15:23.960  ************************************
00:15:23.960   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:15:24.218   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync
00:15:24.218   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:24.218  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:24.218   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:15:24.218   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2996094
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2996094 ']'
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2996094
00:15:24.219    23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname
00:15:24.219   23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:24.219    23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2996094
00:15:24.219   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:24.219   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:24.219   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2996094'
00:15:24.219  killing process with pid 2996094
00:15:24.219   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2996094
00:15:24.219   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2996094
00:15:24.477   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:15:24.477  
00:15:24.477  real	0m20.245s
00:15:24.477  user	1m19.778s
00:15:24.477  sys	0m1.414s
00:15:24.477   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:24.477   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.477  ************************************
00:15:24.477  END TEST nvmf_filesystem_no_in_capsule
00:15:24.477  ************************************
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:15:24.736  ************************************
00:15:24.736  START TEST nvmf_filesystem_in_capsule
00:15:24.736  ************************************
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2999494
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2999494
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2999494 ']'
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:24.736  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:24.736   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.736  [2024-12-09 23:56:40.459359] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:15:24.736  [2024-12-09 23:56:40.459402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:24.736  [2024-12-09 23:56:40.539655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:24.736  [2024-12-09 23:56:40.580313] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:24.736  [2024-12-09 23:56:40.580348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:24.736  [2024-12-09 23:56:40.580355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:24.736  [2024-12-09 23:56:40.580361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:24.736  [2024-12-09 23:56:40.580366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:24.736  [2024-12-09 23:56:40.581787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:24.736  [2024-12-09 23:56:40.581897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:24.737  [2024-12-09 23:56:40.582003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:24.737  [2024-12-09 23:56:40.582005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.995  [2024-12-09 23:56:40.715734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.995  Malloc1
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:24.995   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:25.254  [2024-12-09 23:56:40.870336] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[
00:15:25.254  {
00:15:25.254  "name": "Malloc1",
00:15:25.254  "aliases": [
00:15:25.254  "00f85fdf-f34d-4b04-a8b4-9c3d25f35af5"
00:15:25.254  ],
00:15:25.254  "product_name": "Malloc disk",
00:15:25.254  "block_size": 512,
00:15:25.254  "num_blocks": 1048576,
00:15:25.254  "uuid": "00f85fdf-f34d-4b04-a8b4-9c3d25f35af5",
00:15:25.254  "assigned_rate_limits": {
00:15:25.254  "rw_ios_per_sec": 0,
00:15:25.254  "rw_mbytes_per_sec": 0,
00:15:25.254  "r_mbytes_per_sec": 0,
00:15:25.254  "w_mbytes_per_sec": 0
00:15:25.254  },
00:15:25.254  "claimed": true,
00:15:25.254  "claim_type": "exclusive_write",
00:15:25.254  "zoned": false,
00:15:25.254  "supported_io_types": {
00:15:25.254  "read": true,
00:15:25.254  "write": true,
00:15:25.254  "unmap": true,
00:15:25.254  "flush": true,
00:15:25.254  "reset": true,
00:15:25.254  "nvme_admin": false,
00:15:25.254  "nvme_io": false,
00:15:25.254  "nvme_io_md": false,
00:15:25.254  "write_zeroes": true,
00:15:25.254  "zcopy": true,
00:15:25.254  "get_zone_info": false,
00:15:25.254  "zone_management": false,
00:15:25.254  "zone_append": false,
00:15:25.254  "compare": false,
00:15:25.254  "compare_and_write": false,
00:15:25.254  "abort": true,
00:15:25.254  "seek_hole": false,
00:15:25.254  "seek_data": false,
00:15:25.254  "copy": true,
00:15:25.254  "nvme_iov_md": false
00:15:25.254  },
00:15:25.254  "memory_domains": [
00:15:25.254  {
00:15:25.254  "dma_device_id": "system",
00:15:25.254  "dma_device_type": 1
00:15:25.254  },
00:15:25.254  {
00:15:25.254  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:15:25.254  "dma_device_type": 2
00:15:25.254  }
00:15:25.254  ],
00:15:25.254  "driver_specific": {}
00:15:25.254  }
00:15:25.254  ]'
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512
00:15:25.254     23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512
00:15:25.254    23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:15:25.254   23:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:15:26.628   23:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME
00:15:26.628   23:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0
00:15:26.628   23:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:15:26.628   23:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:15:26.628   23:56:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2
00:15:28.524   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:15:28.524    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:15:28.524    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:15:28.524   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:15:28.524   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:15:28.524   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0
00:15:28.524    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL
00:15:28.524    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)'
00:15:28.524   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:15:28.524    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:15:28.525    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:15:28.525    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:15:28.525    23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:15:28.525   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:15:28.525   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:15:28.525   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:15:28.525   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:15:28.782   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:15:28.782   23:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:15:29.714   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:15:29.715   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:15:29.715   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:29.715   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:29.715   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:29.973  ************************************
00:15:29.973  START TEST filesystem_in_capsule_ext4
00:15:29.973  ************************************
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']'
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F
00:15:29.973   23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:15:29.973  mke2fs 1.47.0 (5-Feb-2023)
00:15:29.973  Discarding device blocks:      0/522240             done                            
00:15:29.973  Creating filesystem with 522240 1k blocks and 130560 inodes
00:15:29.973  Filesystem UUID: a5e73b11-8366-4345-90d9-02ca3125aad6
00:15:29.973  Superblock backups stored on blocks: 
00:15:29.973  	8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:15:29.973  
00:15:29.973  Allocating group tables:  0/64     done                            
00:15:29.973  Writing inode tables:  0/64     done                            
00:15:30.230  Creating journal (8192 blocks): done
00:15:31.310  Writing superblocks and filesystem accounting information:  0/64     done
00:15:31.310  
00:15:31.310   23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0
00:15:31.310   23:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:15:37.903   23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2999494
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:37.903  
00:15:37.903  real	0m7.432s
00:15:37.903  user	0m0.030s
00:15:37.903  sys	0m0.069s
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:15:37.903  ************************************
00:15:37.903  END TEST filesystem_in_capsule_ext4
00:15:37.903  ************************************
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:37.903  ************************************
00:15:37.903  START TEST filesystem_in_capsule_btrfs
00:15:37.903  ************************************
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:37.903   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']'
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:15:37.904  btrfs-progs v6.8.1
00:15:37.904  See https://btrfs.readthedocs.io for more information.
00:15:37.904  
00:15:37.904  Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:15:37.904  NOTE: several default settings have changed in version 5.15, please make sure
00:15:37.904        this does not affect your deployments:
00:15:37.904        - DUP for metadata (-m dup)
00:15:37.904        - enabled no-holes (-O no-holes)
00:15:37.904        - enabled free-space-tree (-R free-space-tree)
00:15:37.904  
00:15:37.904  Label:              (null)
00:15:37.904  UUID:               be995609-ae06-4ed3-a897-f71ab5cffcff
00:15:37.904  Node size:          16384
00:15:37.904  Sector size:        4096	(CPU page size: 4096)
00:15:37.904  Filesystem size:    510.00MiB
00:15:37.904  Block group profiles:
00:15:37.904    Data:             single            8.00MiB
00:15:37.904    Metadata:         DUP              32.00MiB
00:15:37.904    System:           DUP               8.00MiB
00:15:37.904  SSD detected:       yes
00:15:37.904  Zoned device:       no
00:15:37.904  Features:           extref, skinny-metadata, no-holes, free-space-tree
00:15:37.904  Checksum:           crc32c
00:15:37.904  Number of devices:  1
00:15:37.904  Devices:
00:15:37.904     ID        SIZE  PATH          
00:15:37.904      1   510.00MiB  /dev/nvme0n1p1
00:15:37.904  
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:15:37.904   23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2999494
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:38.469  
00:15:38.469  real	0m1.200s
00:15:38.469  user	0m0.037s
00:15:38.469  sys	0m0.108s
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:15:38.469  ************************************
00:15:38.469  END TEST filesystem_in_capsule_btrfs
00:15:38.469  ************************************
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:15:38.469   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:38.727  ************************************
00:15:38.727  START TEST filesystem_in_capsule_xfs
00:15:38.727  ************************************
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:15:38.727   23:56:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:15:38.727  meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:15:38.727           =                       sectsz=512   attr=2, projid32bit=1
00:15:38.727           =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:15:38.727           =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:15:38.727  data     =                       bsize=4096   blocks=130560, imaxpct=25
00:15:38.727           =                       sunit=0      swidth=0 blks
00:15:38.727  naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:15:38.727  log      =internal log           bsize=4096   blocks=16384, version=2
00:15:38.727           =                       sectsz=512   sunit=0 blks, lazy-count=1
00:15:38.727  realtime =none                   extsz=4096   blocks=0, rtextents=0
00:15:39.660  Discarding blocks...Done.
00:15:39.660   23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0
00:15:39.660   23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2999494
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:15:41.556  
00:15:41.556  real	0m2.839s
00:15:41.556  user	0m0.025s
00:15:41.556  sys	0m0.075s
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:15:41.556  ************************************
00:15:41.556  END TEST filesystem_in_capsule_xfs
00:15:41.556  ************************************
00:15:41.556   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:15:41.814  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.814   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:15:41.815   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2999494
00:15:41.815   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2999494 ']'
00:15:41.815   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2999494
00:15:41.815    23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname
00:15:41.815   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:41.815    23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2999494
00:15:42.072   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:42.072   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:42.072   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2999494'
00:15:42.072  killing process with pid 2999494
00:15:42.072   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2999494
00:15:42.072   23:56:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2999494
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:15:42.332  
00:15:42.332  real	0m17.608s
00:15:42.332  user	1m9.304s
00:15:42.332  sys	0m1.378s
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:15:42.332  ************************************
00:15:42.332  END TEST nvmf_filesystem_in_capsule
00:15:42.332  ************************************
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:42.332  rmmod nvme_tcp
00:15:42.332  rmmod nvme_fabrics
00:15:42.332  rmmod nvme_keyring
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:42.332   23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:42.332    23:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:44.869   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:44.869  
00:15:44.869  real	0m46.555s
00:15:44.869  user	2m31.202s
00:15:44.870  sys	0m7.409s
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:15:44.870  ************************************
00:15:44.870  END TEST nvmf_filesystem
00:15:44.870  ************************************
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:44.870  ************************************
00:15:44.870  START TEST nvmf_target_discovery
00:15:44.870  ************************************
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:15:44.870  * Looking for test storage...
00:15:44.870  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:44.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.870  		--rc genhtml_branch_coverage=1
00:15:44.870  		--rc genhtml_function_coverage=1
00:15:44.870  		--rc genhtml_legend=1
00:15:44.870  		--rc geninfo_all_blocks=1
00:15:44.870  		--rc geninfo_unexecuted_blocks=1
00:15:44.870  		
00:15:44.870  		'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:44.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.870  		--rc genhtml_branch_coverage=1
00:15:44.870  		--rc genhtml_function_coverage=1
00:15:44.870  		--rc genhtml_legend=1
00:15:44.870  		--rc geninfo_all_blocks=1
00:15:44.870  		--rc geninfo_unexecuted_blocks=1
00:15:44.870  		
00:15:44.870  		'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:44.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.870  		--rc genhtml_branch_coverage=1
00:15:44.870  		--rc genhtml_function_coverage=1
00:15:44.870  		--rc genhtml_legend=1
00:15:44.870  		--rc geninfo_all_blocks=1
00:15:44.870  		--rc geninfo_unexecuted_blocks=1
00:15:44.870  		
00:15:44.870  		'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:44.870  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:44.870  		--rc genhtml_branch_coverage=1
00:15:44.870  		--rc genhtml_function_coverage=1
00:15:44.870  		--rc genhtml_legend=1
00:15:44.870  		--rc geninfo_all_blocks=1
00:15:44.870  		--rc geninfo_unexecuted_blocks=1
00:15:44.870  		
00:15:44.870  		'
00:15:44.870   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:44.870    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:15:44.870     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:44.871     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:44.871     23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:44.871      23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.871      23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.871      23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.871      23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:15:44.871      23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:15:44.871  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:44.871    23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:15:44.871   23:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=()
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
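The id table traced above buckets PCI vendor:device pairs into the `e810`, `x722`, and `mlx` arrays before one family is promoted to `pci_devs`. A small lookup sketch of that classification (ids copied from the trace; the function name is an assumption):

```shell
# Classification table implied by nvmf/common.sh@325-344: Intel E810 variants,
# the Intel X722 iWARP device, and a set of Mellanox ConnectX ids.
declare -A nic_family=(
  [0x8086:0x1592]=e810 [0x8086:0x159b]=e810
  [0x8086:0x37d2]=x722
  [0x15b3:0xa2dc]=mlx  [0x15b3:0x1021]=mlx [0x15b3:0xa2d6]=mlx
  [0x15b3:0x101d]=mlx  [0x15b3:0x101b]=mlx [0x15b3:0x1017]=mlx
  [0x15b3:0x1019]=mlx  [0x15b3:0x1015]=mlx [0x15b3:0x1013]=mlx
)

classify_nic() { echo "${nic_family[$1]:-unknown}"; }
```

In this run the two detected 0x8086:0x159b devices classify as e810, matching the `[[ e810 == e810 ]]` branch taken a few lines below.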
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:51.443   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:15:51.443  Found 0000:af:00.0 (0x8086 - 0x159b)
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:15:51.444  Found 0000:af:00.1 (0x8086 - 0x159b)
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:15:51.444  Found net devices under 0000:af:00.0: cvl_0_0
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:15:51.444  Found net devices under 0000:af:00.1: cvl_0_1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
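The loop above resolves each PCI address to its network interface via `/sys/bus/pci/devices/<bdf>/net/*` and then keeps only the interface name. The prefix-stripping step (nvmf/common.sh@427) can be exercised on its own, with the sysfs paths hard-coded here so no real `/sys` entries are needed:

```shell
# Same ##*/ expansion as the traced line: strip everything through the last
# slash, turning full sysfs paths into bare interface names.
pci_net_devs=(/sys/bus/pci/devices/0000:af:00.0/net/cvl_0_0
              /sys/bus/pci/devices/0000:af:00.1/net/cvl_0_1)
pci_net_devs=("${pci_net_devs[@]##*/}")
printf '%s\n' "${pci_net_devs[@]}"
# prints:
# cvl_0_0
# cvl_0_1
```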
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:15:51.444   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:15:51.445  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:15:51.445  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms
00:15:51.445  
00:15:51.445  --- 10.0.0.2 ping statistics ---
00:15:51.445  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:51.445  rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:15:51.445  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:15:51.445  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:15:51.445  
00:15:51.445  --- 10.0.0.1 ping statistics ---
00:15:51.445  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:15:51.445  rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
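The sequence from nvmf/common.sh@267-291 above wires the target NIC into a network namespace, assigns each side a 10.0.0.0/24 address, and verifies reachability with ping in both directions. A dry-run sketch of that wiring, echoing the commands rather than executing them, since the real ones need root and the `cvl_0_*` interfaces:

```shell
# Dry-run of the namespace setup traced above. Swap run() for eval "$@"
# on a real host; the function name and structure are assumptions.
setup_nvmf_tcp_net() {
  local tgt_if=$1 ini_if=$2 ns=${3:-cvl_0_0_ns_spdk}
  run() { echo "$*"; }
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"                      # target NIC into the netns
  run ip addr add 10.0.0.1/24 dev "$ini_if"                  # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ping -c 1 10.0.0.2                                     # reachability check
}

setup_nvmf_tcp_net cvl_0_0 cvl_0_1
```

Running the target inside its own namespace is what lets initiator and target share one host while still exercising a real TCP path between distinct interfaces.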
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=3006335
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 3006335
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3006335 ']'
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:51.445  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445  [2024-12-09 23:57:06.428725] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:15:51.445  [2024-12-09 23:57:06.428771] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:15:51.445  [2024-12-09 23:57:06.508683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:51.445  [2024-12-09 23:57:06.549763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:15:51.445  [2024-12-09 23:57:06.549801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:15:51.445  [2024-12-09 23:57:06.549809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:15:51.445  [2024-12-09 23:57:06.549816] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:15:51.445  [2024-12-09 23:57:06.549821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:15:51.445  [2024-12-09 23:57:06.551280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:51.445  [2024-12-09 23:57:06.551391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:51.445  [2024-12-09 23:57:06.551424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:51.445  [2024-12-09 23:57:06.551426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
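The `waitforlisten 3006335` call above polls until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`, bounded by `max_retries=100`. A hedged sketch of that bounded-poll pattern, using process liveness as a stand-in probe (the real helper checks the RPC socket, which this sketch does not reproduce):

```shell
# Bounded retry loop in the shape of waitforlisten. The probe here is
# kill -0 (process exists); the real helper probes the UNIX RPC socket.
waitforlisten() {
  local pid=$1 max_retries=${2:-100} i=0
  while ! kill -0 "$pid" 2>/dev/null && (( i < max_retries )); do
    sleep 0.1
    (( ++i ))
  done
  (( i < max_retries ))   # success iff the probe passed before retries ran out
}
```

The bounded loop is what turns a hung target start into a clean test failure instead of an indefinite stall.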
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445  [2024-12-09 23:57:06.688704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.445    23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445  Null1
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445  [2024-12-09 23:57:06.748312] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.445   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.445  Null2
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446  Null3
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446  Null4
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.446   23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:15:51.446  
00:15:51.446  Discovery Log Number of Records 6, Generation counter 6
00:15:51.446  =====Discovery Log Entry 0======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: current discovery subsystem
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4420
00:15:51.446  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  explicit discovery connections, duplicate discovery information
00:15:51.446  sectype: none
00:15:51.446  =====Discovery Log Entry 1======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: nvme subsystem
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4420
00:15:51.446  subnqn:  nqn.2016-06.io.spdk:cnode1
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  none
00:15:51.446  sectype: none
00:15:51.446  =====Discovery Log Entry 2======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: nvme subsystem
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4420
00:15:51.446  subnqn:  nqn.2016-06.io.spdk:cnode2
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  none
00:15:51.446  sectype: none
00:15:51.446  =====Discovery Log Entry 3======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: nvme subsystem
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4420
00:15:51.446  subnqn:  nqn.2016-06.io.spdk:cnode3
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  none
00:15:51.446  sectype: none
00:15:51.446  =====Discovery Log Entry 4======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: nvme subsystem
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4420
00:15:51.446  subnqn:  nqn.2016-06.io.spdk:cnode4
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  none
00:15:51.446  sectype: none
00:15:51.446  =====Discovery Log Entry 5======
00:15:51.446  trtype:  tcp
00:15:51.446  adrfam:  ipv4
00:15:51.446  subtype: discovery subsystem referral
00:15:51.446  treq:    not required
00:15:51.446  portid:  0
00:15:51.446  trsvcid: 4430
00:15:51.446  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:15:51.446  traddr:  10.0.0.2
00:15:51.446  eflags:  none
00:15:51.446  sectype: none
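The `nvme discover` dump above reports six records: the current discovery subsystem, the four NVMe subsystems (cnode1-cnode4) created earlier in the test, and the referral added on port 4430. As a minimal sketch, plain-text output of this shape can be parsed into records like this (the sample below is abbreviated from the log above; the parser itself is an illustration, not part of the SPDK test scripts):

```python
# Minimal sketch: parse `nvme discover` text output into a list of dicts.
# SAMPLE is abbreviated from the discovery log printed above; the parsing
# logic is an assumption for illustration, not SPDK test code.

SAMPLE = """\
Discovery Log Number of Records 6, Generation counter 6
=====Discovery Log Entry 0======
trtype:  tcp
adrfam:  ipv4
subtype: current discovery subsystem
trsvcid: 4420
subnqn:  nqn.2014-08.org.nvmexpress.discovery
traddr:  10.0.0.2
=====Discovery Log Entry 1======
trtype:  tcp
subtype: nvme subsystem
trsvcid: 4420
subnqn:  nqn.2016-06.io.spdk:cnode1
traddr:  10.0.0.2
"""

def parse_discovery(text):
    records, current = [], None
    for line in text.splitlines():
        if line.startswith("====="):          # marks the start of a new log entry
            current = {}
            records.append(current)
        elif current is not None and ":" in line:
            key, _, value = line.partition(":")  # split on the first colon only
            current[key.strip()] = value.strip()
    return records

records = parse_discovery(SAMPLE)
print(len(records), records[1]["subnqn"])  # → 2 nqn.2016-06.io.spdk:cnode1
```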
00:15:51.446   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:15:51.446  Perform nvmf subsystem discovery via RPC
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447  [
00:15:51.447  {
00:15:51.447  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:51.447  "subtype": "Discovery",
00:15:51.447  "listen_addresses": [
00:15:51.447  {
00:15:51.447  "trtype": "TCP",
00:15:51.447  "adrfam": "IPv4",
00:15:51.447  "traddr": "10.0.0.2",
00:15:51.447  "trsvcid": "4420"
00:15:51.447  }
00:15:51.447  ],
00:15:51.447  "allow_any_host": true,
00:15:51.447  "hosts": []
00:15:51.447  },
00:15:51.447  {
00:15:51.447  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:15:51.447  "subtype": "NVMe",
00:15:51.447  "listen_addresses": [
00:15:51.447  {
00:15:51.447  "trtype": "TCP",
00:15:51.447  "adrfam": "IPv4",
00:15:51.447  "traddr": "10.0.0.2",
00:15:51.447  "trsvcid": "4420"
00:15:51.447  }
00:15:51.447  ],
00:15:51.447  "allow_any_host": true,
00:15:51.447  "hosts": [],
00:15:51.447  "serial_number": "SPDK00000000000001",
00:15:51.447  "model_number": "SPDK bdev Controller",
00:15:51.447  "max_namespaces": 32,
00:15:51.447  "min_cntlid": 1,
00:15:51.447  "max_cntlid": 65519,
00:15:51.447  "namespaces": [
00:15:51.447  {
00:15:51.447  "nsid": 1,
00:15:51.447  "bdev_name": "Null1",
00:15:51.447  "name": "Null1",
00:15:51.447  "nguid": "058C1BE8EFBA46E899B507C223970094",
00:15:51.447  "uuid": "058c1be8-efba-46e8-99b5-07c223970094"
00:15:51.447  }
00:15:51.447  ]
00:15:51.447  },
00:15:51.447  {
00:15:51.447  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:15:51.447  "subtype": "NVMe",
00:15:51.447  "listen_addresses": [
00:15:51.447  {
00:15:51.447  "trtype": "TCP",
00:15:51.447  "adrfam": "IPv4",
00:15:51.447  "traddr": "10.0.0.2",
00:15:51.447  "trsvcid": "4420"
00:15:51.447  }
00:15:51.447  ],
00:15:51.447  "allow_any_host": true,
00:15:51.447  "hosts": [],
00:15:51.447  "serial_number": "SPDK00000000000002",
00:15:51.447  "model_number": "SPDK bdev Controller",
00:15:51.447  "max_namespaces": 32,
00:15:51.447  "min_cntlid": 1,
00:15:51.447  "max_cntlid": 65519,
00:15:51.447  "namespaces": [
00:15:51.447  {
00:15:51.447  "nsid": 1,
00:15:51.447  "bdev_name": "Null2",
00:15:51.447  "name": "Null2",
00:15:51.447  "nguid": "441433C45FB14F36938752223016D11E",
00:15:51.447  "uuid": "441433c4-5fb1-4f36-9387-52223016d11e"
00:15:51.447  }
00:15:51.447  ]
00:15:51.447  },
00:15:51.447  {
00:15:51.447  "nqn": "nqn.2016-06.io.spdk:cnode3",
00:15:51.447  "subtype": "NVMe",
00:15:51.447  "listen_addresses": [
00:15:51.447  {
00:15:51.447  "trtype": "TCP",
00:15:51.447  "adrfam": "IPv4",
00:15:51.447  "traddr": "10.0.0.2",
00:15:51.447  "trsvcid": "4420"
00:15:51.447  }
00:15:51.447  ],
00:15:51.447  "allow_any_host": true,
00:15:51.447  "hosts": [],
00:15:51.447  "serial_number": "SPDK00000000000003",
00:15:51.447  "model_number": "SPDK bdev Controller",
00:15:51.447  "max_namespaces": 32,
00:15:51.447  "min_cntlid": 1,
00:15:51.447  "max_cntlid": 65519,
00:15:51.447  "namespaces": [
00:15:51.447  {
00:15:51.447  "nsid": 1,
00:15:51.447  "bdev_name": "Null3",
00:15:51.447  "name": "Null3",
00:15:51.447  "nguid": "A38F953C3DCF4E6E879EB6E1A5B398B1",
00:15:51.447  "uuid": "a38f953c-3dcf-4e6e-879e-b6e1a5b398b1"
00:15:51.447  }
00:15:51.447  ]
00:15:51.447  },
00:15:51.447  {
00:15:51.447  "nqn": "nqn.2016-06.io.spdk:cnode4",
00:15:51.447  "subtype": "NVMe",
00:15:51.447  "listen_addresses": [
00:15:51.447  {
00:15:51.447  "trtype": "TCP",
00:15:51.447  "adrfam": "IPv4",
00:15:51.447  "traddr": "10.0.0.2",
00:15:51.447  "trsvcid": "4420"
00:15:51.447  }
00:15:51.447  ],
00:15:51.447  "allow_any_host": true,
00:15:51.447  "hosts": [],
00:15:51.447  "serial_number": "SPDK00000000000004",
00:15:51.447  "model_number": "SPDK bdev Controller",
00:15:51.447  "max_namespaces": 32,
00:15:51.447  "min_cntlid": 1,
00:15:51.447  "max_cntlid": 65519,
00:15:51.447  "namespaces": [
00:15:51.447  {
00:15:51.447  "nsid": 1,
00:15:51.447  "bdev_name": "Null4",
00:15:51.447  "name": "Null4",
00:15:51.447  "nguid": "A583C91441C04E2392B173F30EB560B0",
00:15:51.447  "uuid": "a583c914-41c0-4e23-92b1-73f30eb560b0"
00:15:51.447  }
00:15:51.447  ]
00:15:51.447  }
00:15:51.447  ]
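The `nvmf_get_subsystems` JSON above has a regular shape: one discovery subsystem plus four NVMe subsystems, each carrying its listeners and namespaces. A short sketch of extracting each NVMe subsystem's NQN and its backing bdevs from such a payload (the embedded sample is abbreviated from the output above; treat the helper as an illustration, not SPDK tooling):

```python
import json

# Abbreviated sample adapted from the nvmf_get_subsystems output above.
SAMPLE = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
""")

# Map each NVMe subsystem NQN to the bdev names backing its namespaces,
# skipping the discovery subsystem (which has no namespaces).
bdevs_by_nqn = {
    sub["nqn"]: [ns["bdev_name"] for ns in sub.get("namespaces", [])]
    for sub in SAMPLE
    if sub["subtype"] == "NVMe"
}
print(bdevs_by_nqn)  # → {'nqn.2016-06.io.spdk:cnode1': ['Null1']}
```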
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.447   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.448    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:15:51.448    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:15:51.448    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:51.448    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:51.448    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:51.448   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:51.448  rmmod nvme_tcp
00:15:51.448  rmmod nvme_fabrics
00:15:51.448  rmmod nvme_keyring
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 3006335 ']'
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 3006335
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3006335 ']'
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3006335
00:15:51.706    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:51.706    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3006335
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3006335'
00:15:51.706  killing process with pid 3006335
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3006335
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3006335
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:51.706   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:15:51.707   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:51.707   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:15:51.707   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:51.707   23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:51.707    23:57:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:15:54.297  
00:15:54.297  real	0m9.356s
00:15:54.297  user	0m5.813s
00:15:54.297  sys	0m4.772s
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:15:54.297  ************************************
00:15:54.297  END TEST nvmf_target_discovery
00:15:54.297  ************************************
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:15:54.297  ************************************
00:15:54.297  START TEST nvmf_referrals
00:15:54.297  ************************************
00:15:54.297   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:15:54.297  * Looking for test storage...
00:15:54.297  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:15:54.297     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version
00:15:54.297     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-:
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-:
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<'
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:54.297    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:54.297     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1
00:15:54.297     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0
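The trace above steps through the `cmp_versions` helper from scripts/common.sh: each version string is split on `.`, `-`, and `:`, and the components are compared numerically left to right (here `lt 1.15 2` succeeds because the first components satisfy 1 < 2). A rough Python equivalent of the same algorithm (an illustration of the comparison logic, not the shell code itself):

```python
import re

def cmp_versions(ver1, op, ver2):
    """Compare dotted version strings component-wise, mirroring the
    shell helper traced above. op is '<' or '>'."""
    a = [int(x) for x in re.split(r"[.:-]", ver1)]
    b = [int(x) for x in re.split(r"[.:-]", ver2)]
    # Pad the shorter component list with zeros, as the shell loop
    # effectively treats missing components as 0.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x != y:
            return {"<": x < y, ">": x > y}[op]
    return False  # equal versions satisfy neither < nor >

print(cmp_versions("1.15", "<", "2"))  # → True
```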
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:15:54.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.298  		--rc genhtml_branch_coverage=1
00:15:54.298  		--rc genhtml_function_coverage=1
00:15:54.298  		--rc genhtml_legend=1
00:15:54.298  		--rc geninfo_all_blocks=1
00:15:54.298  		--rc geninfo_unexecuted_blocks=1
00:15:54.298  		
00:15:54.298  		'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:15:54.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.298  		--rc genhtml_branch_coverage=1
00:15:54.298  		--rc genhtml_function_coverage=1
00:15:54.298  		--rc genhtml_legend=1
00:15:54.298  		--rc geninfo_all_blocks=1
00:15:54.298  		--rc geninfo_unexecuted_blocks=1
00:15:54.298  		
00:15:54.298  		'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:15:54.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.298  		--rc genhtml_branch_coverage=1
00:15:54.298  		--rc genhtml_function_coverage=1
00:15:54.298  		--rc genhtml_legend=1
00:15:54.298  		--rc geninfo_all_blocks=1
00:15:54.298  		--rc geninfo_unexecuted_blocks=1
00:15:54.298  		
00:15:54.298  		'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:15:54.298  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:54.298  		--rc genhtml_branch_coverage=1
00:15:54.298  		--rc genhtml_function_coverage=1
00:15:54.298  		--rc genhtml_legend=1
00:15:54.298  		--rc geninfo_all_blocks=1
00:15:54.298  		--rc geninfo_unexecuted_blocks=1
00:15:54.298  		
00:15:54.298  		'
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:54.298     23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:54.298      23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.298      23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.298      23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.298      23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH
00:15:54.298      23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:15:54.298  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:15:54.298    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430
00:15:54.298   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:54.299    23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable
00:15:54.299   23:57:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=()
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:16:00.870  Found 0000:af:00.0 (0x8086 - 0x159b)
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:16:00.870  Found 0000:af:00.1 (0x8086 - 0x159b)
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:16:00.870  Found net devices under 0000:af:00.0: cvl_0_0
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:16:00.870  Found net devices under 0000:af:00.1: cvl_0_1
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:16:00.870   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:00.871  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:00.871  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms
00:16:00.871  
00:16:00.871  --- 10.0.0.2 ping statistics ---
00:16:00.871  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:00.871  rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:00.871  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:00.871  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms
00:16:00.871  
00:16:00.871  --- 10.0.0.1 ping statistics ---
00:16:00.871  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:00.871  rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=3010306
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 3010306
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3010306 ']'
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:00.871  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:00.871   23:57:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.871  [2024-12-09 23:57:15.881634] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:16:00.871  [2024-12-09 23:57:15.881682] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:00.871  [2024-12-09 23:57:15.959854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:00.871  [2024-12-09 23:57:15.999584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:00.871  [2024-12-09 23:57:15.999621] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:00.871  [2024-12-09 23:57:15.999629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:00.871  [2024-12-09 23:57:15.999635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:00.871  [2024-12-09 23:57:15.999640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:00.871  [2024-12-09 23:57:16.001109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:00.871  [2024-12-09 23:57:16.001219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:00.871  [2024-12-09 23:57:16.001262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:00.871  [2024-12-09 23:57:16.001263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.871  [2024-12-09 23:57:16.150888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.871  [2024-12-09 23:57:16.176318] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.871   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 ))
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 ))
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:00.872   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc
00:16:00.872    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:16:00.872     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:01.131     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2
00:16:01.131   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:16:01.131     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.131     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:16:01.131     23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2
00:16:01.131   23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]]
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem'
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.131    23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:16:01.389   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:16:01.389    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral'
00:16:01.389    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn
00:16:01.389    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:16:01.389    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.389    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]]
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr'
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2
00:16:01.647   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:16:01.647    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:16:01.647     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2
00:16:01.905   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]]
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem'
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem'
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")'
00:16:01.905   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]]
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral'
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral'
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:01.905    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")'
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]]
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:02.163   23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 ))
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]]
00:16:02.163    23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]]
00:16:02.163     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json
00:16:02.163     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
00:16:02.163     23:57:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort
00:16:02.421    23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo
00:16:02.421   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]]
00:16:02.421   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT
00:16:02.421   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini
00:16:02.421   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:02.422  rmmod nvme_tcp
00:16:02.422  rmmod nvme_fabrics
00:16:02.422  rmmod nvme_keyring
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 3010306 ']'
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 3010306
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3010306 ']'
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3010306
00:16:02.422    23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:02.422    23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3010306
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3010306'
00:16:02.422  killing process with pid 3010306
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3010306
00:16:02.422   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3010306
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:02.681   23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:02.681    23:57:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:04.586   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:04.586  
00:16:04.586  real	0m10.754s
00:16:04.586  user	0m11.986s
00:16:04.586  sys	0m5.179s
00:16:04.586   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:04.586   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x
00:16:04.586  ************************************
00:16:04.586  END TEST nvmf_referrals
00:16:04.586  ************************************
00:16:04.845   23:57:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:16:04.845   23:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:04.845   23:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:04.845   23:57:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:04.845  ************************************
00:16:04.845  START TEST nvmf_connect_disconnect
00:16:04.845  ************************************
00:16:04.845   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp
00:16:04.845  * Looking for test storage...
00:16:04.845  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:04.845     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:04.845  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.845  		--rc genhtml_branch_coverage=1
00:16:04.845  		--rc genhtml_function_coverage=1
00:16:04.845  		--rc genhtml_legend=1
00:16:04.845  		--rc geninfo_all_blocks=1
00:16:04.845  		--rc geninfo_unexecuted_blocks=1
00:16:04.845  		
00:16:04.845  		'
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:04.845  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.845  		--rc genhtml_branch_coverage=1
00:16:04.845  		--rc genhtml_function_coverage=1
00:16:04.845  		--rc genhtml_legend=1
00:16:04.845  		--rc geninfo_all_blocks=1
00:16:04.845  		--rc geninfo_unexecuted_blocks=1
00:16:04.845  		
00:16:04.845  		'
00:16:04.845    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:04.845  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.846  		--rc genhtml_branch_coverage=1
00:16:04.846  		--rc genhtml_function_coverage=1
00:16:04.846  		--rc genhtml_legend=1
00:16:04.846  		--rc geninfo_all_blocks=1
00:16:04.846  		--rc geninfo_unexecuted_blocks=1
00:16:04.846  		
00:16:04.846  		'
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:04.846  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:04.846  		--rc genhtml_branch_coverage=1
00:16:04.846  		--rc genhtml_function_coverage=1
00:16:04.846  		--rc genhtml_legend=1
00:16:04.846  		--rc geninfo_all_blocks=1
00:16:04.846  		--rc geninfo_unexecuted_blocks=1
00:16:04.846  		
00:16:04.846  		'
00:16:04.846   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:04.846     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:04.846    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:05.105     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:05.105     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:16:05.105     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:05.105     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:05.105     23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:05.105      23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:05.105      23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:05.105      23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:05.105      23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH
00:16:05.105      23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:05.105  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:05.105    23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:16:05.105   23:57:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:16:11.684   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=()
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=()
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:16:11.685  Found 0000:af:00.0 (0x8086 - 0x159b)
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:16:11.685  Found 0000:af:00.1 (0x8086 - 0x159b)
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:16:11.685  Found net devices under 0000:af:00.0: cvl_0_0
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:16:11.685  Found net devices under 0000:af:00.1: cvl_0_1
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:11.685   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:11.686  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:11.686  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.425 ms
00:16:11.686  
00:16:11.686  --- 10.0.0.2 ping statistics ---
00:16:11.686  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.686  rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:11.686  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:11.686  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms
00:16:11.686  
00:16:11.686  --- 10.0.0.1 ping statistics ---
00:16:11.686  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:11.686  rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
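The xtrace above shows `nvmf_tcp_init` splitting the two E810 ports across network namespaces: `cvl_0_0` is moved into `cvl_0_0_ns_spdk` as the target side, `cvl_0_1` stays in the default namespace as the initiator, and a ping in each direction verifies reachability. A minimal dry-run sketch of that command sequence (interface names and addresses taken from the log; commands are echoed rather than executed, since the real thing needs root and the physical NICs):

```shell
# Dry-run sketch of the namespace setup performed by nvmf_tcp_init above.
# Names and IPs mirror the log; echo instead of executing so the sequence
# can be inspected without root privileges or the E810 hardware.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target-side port, moved into the namespace
INI_IF=cvl_0_1        # initiator-side port, stays in the default namespace

netns_setup_cmds() {
    echo "ip netns add $NS"
    echo "ip link set $TGT_IF netns $NS"
    echo "ip addr add 10.0.0.1/24 dev $INI_IF"
    echo "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    echo "ip link set $INI_IF up"
    echo "ip netns exec $NS ip link set $TGT_IF up"
    echo "ping -c 1 10.0.0.2"                      # initiator -> target
    echo "ip netns exec $NS ping -c 1 10.0.0.1"    # target -> initiator
}
netns_setup_cmds
```

Running the target inside its own namespace is what lets a single machine act as both NVMe/TCP initiator and target over real NIC ports.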
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=3014325
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 3014325
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3014325 ']'
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:11.686  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.686  [2024-12-09 23:57:26.735634] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:16:11.686  [2024-12-09 23:57:26.735683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:11.686  [2024-12-09 23:57:26.814198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:11.686  [2024-12-09 23:57:26.856183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:11.686  [2024-12-09 23:57:26.856215] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:11.686  [2024-12-09 23:57:26.856226] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:11.686  [2024-12-09 23:57:26.856232] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:11.686  [2024-12-09 23:57:26.856237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:11.686  [2024-12-09 23:57:26.857596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:11.686  [2024-12-09 23:57:26.857708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:11.686  [2024-12-09 23:57:26.857812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:11.686  [2024-12-09 23:57:26.857813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.686  [2024-12-09 23:57:26.994177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:11.686   23:57:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.686    23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512
00:16:11.686    23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.686    23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.686    23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.686   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:11.687  [2024-12-09 23:57:27.058689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']'
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5
00:16:11.687   23:57:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x
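The `rpc_cmd` calls traced above build the target in five steps: create the TCP transport, create a 64 MiB malloc bdev, create the subsystem, attach the bdev as a namespace, and add a listener on 10.0.0.2:4420. A sketch of the equivalent `scripts/rpc.py` invocations, echoed as a dry run (the `scripts/rpc.py` path assumes an SPDK checkout; all argument values come from the log):

```shell
# Dry-run sketch of the rpc.py equivalents of the rpc_cmd sequence above.
# Echoed rather than executed, since they require a live nvmf_tgt process.
RPC="scripts/rpc.py"                 # assumed path inside an SPDK checkout
NQN=nqn.2016-06.io.spdk:cnode1

target_setup_cmds() {
    echo "$RPC nvmf_create_transport -t tcp -o -u 8192 -c 0"
    echo "$RPC bdev_malloc_create 64 512"   # 64 MiB bdev, 512-byte blocks
    echo "$RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME"
    echo "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
    echo "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
}
target_setup_cmds
```

The 64 and 512 arguments are exactly the `MALLOC_BDEV_SIZE` and `MALLOC_BLOCK_SIZE` values set at the top of `connect_disconnect.sh`.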
00:16:15.165  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:18.449  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:21.733  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:25.016  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:27.547  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
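The five "disconnected 1 controller(s)" lines correspond to `num_iterations=5` set at `connect_disconnect.sh@31`: each iteration connects a host controller to the subsystem and disconnects it again. A sketch of what such a loop looks like with nvme-cli syntax, echoed as a dry run (the exact nvme-cli invocation here is an assumption; the log only shows the disconnect messages, not the commands):

```shell
# Dry-run sketch of a 5-iteration connect/disconnect loop like the one
# implied by num_iterations=5 above. nvme-cli flags are an assumption;
# commands are echoed so the loop runs without a live target.
NQN=nqn.2016-06.io.spdk:cnode1

connect_disconnect_loop() {
    for i in 1 2 3 4 5; do
        echo "nvme connect -t tcp -n $NQN -a 10.0.0.2 -s 4420"
        echo "nvme disconnect -n $NQN"
    done
}
connect_disconnect_loop
```

Each cycle takes roughly 3 seconds in the timestamps above, consistent with a full fabric connect, controller enumeration, and teardown per iteration.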
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:27.547   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:27.547  rmmod nvme_tcp
00:16:27.805  rmmod nvme_fabrics
00:16:27.805  rmmod nvme_keyring
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 3014325 ']'
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 3014325
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3014325 ']'
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3014325
00:16:27.805    23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:27.805    23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3014325
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3014325'
00:16:27.805  killing process with pid 3014325
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3014325
00:16:27.805   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3014325
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:28.065   23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:28.065    23:57:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
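The teardown traced above (`iptr`, `remove_spdk_ns`, address flush) undoes the init: SPDK_NVMF-tagged iptables rules are filtered out of a save/restore round trip, the namespace is removed, and the initiator interface's addresses are flushed. A dry-run sketch (deleting the namespace is the assumed effect of `_remove_spdk_ns`, whose body is suppressed in the log by `xtrace_disable_per_cmd`):

```shell
# Dry-run sketch of the teardown performed by nvmf_tcp_fini/iptr above.
# "ip netns delete" is assumed to be what _remove_spdk_ns does; its trace
# output is redirected to /dev/null in the log.
NS=cvl_0_0_ns_spdk

teardown_cmds() {
    echo "iptables-save | grep -v SPDK_NVMF | iptables-restore"
    echo "ip netns delete $NS"
    echo "ip -4 addr flush cvl_0_1"
}
teardown_cmds
```

Tagging the test's iptables rules with an `SPDK_NVMF` comment at insert time is what makes this grep-based cleanup safe: only rules the harness added are dropped.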
00:16:29.968  
00:16:29.968  real	0m25.234s
00:16:29.968  user	1m8.427s
00:16:29.968  sys	0m5.837s
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x
00:16:29.968  ************************************
00:16:29.968  END TEST nvmf_connect_disconnect
00:16:29.968  ************************************
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:29.968  ************************************
00:16:29.968  START TEST nvmf_multitarget
00:16:29.968  ************************************
00:16:29.968   23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
00:16:30.227  * Looking for test storage...
00:16:30.227  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:30.227    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-:
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-:
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:30.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:30.228  		--rc genhtml_branch_coverage=1
00:16:30.228  		--rc genhtml_function_coverage=1
00:16:30.228  		--rc genhtml_legend=1
00:16:30.228  		--rc geninfo_all_blocks=1
00:16:30.228  		--rc geninfo_unexecuted_blocks=1
00:16:30.228  		
00:16:30.228  		'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:30.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:30.228  		--rc genhtml_branch_coverage=1
00:16:30.228  		--rc genhtml_function_coverage=1
00:16:30.228  		--rc genhtml_legend=1
00:16:30.228  		--rc geninfo_all_blocks=1
00:16:30.228  		--rc geninfo_unexecuted_blocks=1
00:16:30.228  		
00:16:30.228  		'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:30.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:30.228  		--rc genhtml_branch_coverage=1
00:16:30.228  		--rc genhtml_function_coverage=1
00:16:30.228  		--rc genhtml_legend=1
00:16:30.228  		--rc geninfo_all_blocks=1
00:16:30.228  		--rc geninfo_unexecuted_blocks=1
00:16:30.228  		
00:16:30.228  		'
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:30.228  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:30.228  		--rc genhtml_branch_coverage=1
00:16:30.228  		--rc genhtml_function_coverage=1
00:16:30.228  		--rc genhtml_legend=1
00:16:30.228  		--rc geninfo_all_blocks=1
00:16:30.228  		--rc geninfo_unexecuted_blocks=1
00:16:30.228  		
00:16:30.228  		'
00:16:30.228   23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:30.228    23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:30.228     23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:30.228    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:30.228     23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob
00:16:30.228     23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:30.228     23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:30.228     23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:30.228      23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:30.228      23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:30.228      23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:30.228      23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH
00:16:30.229      23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:30.229  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:30.229    23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable
00:16:30.229   23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=()
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:16:36.796  Found 0000:af:00.0 (0x8086 - 0x159b)
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:16:36.796  Found 0000:af:00.1 (0x8086 - 0x159b)
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:36.796   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:16:36.797  Found net devices under 0000:af:00.0: cvl_0_0
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:16:36.797  Found net devices under 0000:af:00.1: cvl_0_1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:36.797  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:36.797  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms
00:16:36.797  
00:16:36.797  --- 10.0.0.2 ping statistics ---
00:16:36.797  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.797  rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:36.797  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:36.797  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms
00:16:36.797  
00:16:36.797  --- 10.0.0.1 ping statistics ---
00:16:36.797  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:36.797  rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=3020588
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 3020588
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3020588 ']'
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:36.797  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:36.797   23:57:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:36.797  [2024-12-09 23:57:52.006601] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:16:36.797  [2024-12-09 23:57:52.006653] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:36.797  [2024-12-09 23:57:52.085263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:36.797  [2024-12-09 23:57:52.126683] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:36.797  [2024-12-09 23:57:52.126720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:36.797  [2024-12-09 23:57:52.126727] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:36.797  [2024-12-09 23:57:52.126733] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:36.797  [2024-12-09 23:57:52.126738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:36.797  [2024-12-09 23:57:52.128092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:36.797  [2024-12-09 23:57:52.128235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:36.797  [2024-12-09 23:57:52.128272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:36.797  [2024-12-09 23:57:52.128273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:36.797   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:36.797   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0
00:16:36.797   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:36.797   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:36.797   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:36.798   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:36.798   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:16:36.798    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:36.798    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:16:36.798   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:16:36.798   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:16:36.798  "nvmf_tgt_1"
00:16:36.798   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:16:36.798  "nvmf_tgt_2"
00:16:36.798    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:36.798    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:16:37.056   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:16:37.056   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:16:37.056  true
00:16:37.056   23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:16:37.056  true
00:16:37.315    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:16:37.315    23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20}
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:16:37.315  rmmod nvme_tcp
00:16:37.315  rmmod nvme_fabrics
00:16:37.315  rmmod nvme_keyring
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e
00:16:37.315   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 3020588 ']'
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 3020588
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3020588 ']'
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3020588
00:16:37.316    23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:37.316    23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3020588
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3020588'
00:16:37.316  killing process with pid 3020588
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3020588
00:16:37.316   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3020588
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:37.575   23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:37.575    23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:16:40.109  
00:16:40.109  real	0m9.578s
00:16:40.109  user	0m7.252s
00:16:40.109  sys	0m4.896s
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:16:40.109  ************************************
00:16:40.109  END TEST nvmf_multitarget
00:16:40.109  ************************************
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:16:40.109   23:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:40.110   23:57:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:40.110  ************************************
00:16:40.110  START TEST nvmf_rpc
00:16:40.110  ************************************
00:16:40.110   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:16:40.110  * Looking for test storage...
00:16:40.110  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:40.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.110  		--rc genhtml_branch_coverage=1
00:16:40.110  		--rc genhtml_function_coverage=1
00:16:40.110  		--rc genhtml_legend=1
00:16:40.110  		--rc geninfo_all_blocks=1
00:16:40.110  		--rc geninfo_unexecuted_blocks=1
00:16:40.110  		
00:16:40.110  		'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:40.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.110  		--rc genhtml_branch_coverage=1
00:16:40.110  		--rc genhtml_function_coverage=1
00:16:40.110  		--rc genhtml_legend=1
00:16:40.110  		--rc geninfo_all_blocks=1
00:16:40.110  		--rc geninfo_unexecuted_blocks=1
00:16:40.110  		
00:16:40.110  		'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:16:40.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.110  		--rc genhtml_branch_coverage=1
00:16:40.110  		--rc genhtml_function_coverage=1
00:16:40.110  		--rc genhtml_legend=1
00:16:40.110  		--rc geninfo_all_blocks=1
00:16:40.110  		--rc geninfo_unexecuted_blocks=1
00:16:40.110  		
00:16:40.110  		'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:16:40.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:40.110  		--rc genhtml_branch_coverage=1
00:16:40.110  		--rc genhtml_function_coverage=1
00:16:40.110  		--rc genhtml_legend=1
00:16:40.110  		--rc geninfo_all_blocks=1
00:16:40.110  		--rc geninfo_unexecuted_blocks=1
00:16:40.110  		
00:16:40.110  		'
00:16:40.110   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:40.110    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:40.110     23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:40.110      23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.110      23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.111      23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.111      23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:16:40.111      23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:16:40.111  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:40.111    23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable
00:16:40.111   23:57:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=()
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:16:46.683   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:16:46.684  Found 0000:af:00.0 (0x8086 - 0x159b)
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:16:46.684  Found 0000:af:00.1 (0x8086 - 0x159b)
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:16:46.684  Found net devices under 0000:af:00.0: cvl_0_0
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:16:46.684  Found net devices under 0000:af:00.1: cvl_0_1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:16:46.684  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:46.684  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms
00:16:46.684  
00:16:46.684  --- 10.0.0.2 ping statistics ---
00:16:46.684  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:46.684  rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:16:46.684  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:46.684  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms
00:16:46.684  
00:16:46.684  --- 10.0.0.1 ping statistics ---
00:16:46.684  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:46.684  rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=3024313
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 3024313
00:16:46.684   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3024313 ']'
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:46.685  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.685  [2024-12-09 23:58:01.748362] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:16:46.685  [2024-12-09 23:58:01.748407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:46.685  [2024-12-09 23:58:01.825717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:46.685  [2024-12-09 23:58:01.863921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:46.685  [2024-12-09 23:58:01.863961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:46.685  [2024-12-09 23:58:01.863968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:46.685  [2024-12-09 23:58:01.863974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:46.685  [2024-12-09 23:58:01.863979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:46.685  [2024-12-09 23:58:01.865432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:46.685  [2024-12-09 23:58:01.865541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:46.685  [2024-12-09 23:58:01.865628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:46.685  [2024-12-09 23:58:01.865629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:46.685   23:58:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{
00:16:46.685  "tick_rate": 2100000000,
00:16:46.685  "poll_groups": [
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_000",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": []
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_001",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": []
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_002",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": []
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_003",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": []
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  }'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 ))
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]'
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]]
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.685  [2024-12-09 23:58:02.123527] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{
00:16:46.685  "tick_rate": 2100000000,
00:16:46.685  "poll_groups": [
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_000",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": [
00:16:46.685  {
00:16:46.685  "trtype": "TCP"
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_001",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": [
00:16:46.685  {
00:16:46.685  "trtype": "TCP"
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_002",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": [
00:16:46.685  {
00:16:46.685  "trtype": "TCP"
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  },
00:16:46.685  {
00:16:46.685  "name": "nvmf_tgt_poll_group_003",
00:16:46.685  "admin_qpairs": 0,
00:16:46.685  "io_qpairs": 0,
00:16:46.685  "current_admin_qpairs": 0,
00:16:46.685  "current_io_qpairs": 0,
00:16:46.685  "pending_bdev_io": 0,
00:16:46.685  "completed_nvme_io": 0,
00:16:46.685  "transports": [
00:16:46.685  {
00:16:46.685  "trtype": "TCP"
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  }
00:16:46.685  ]
00:16:46.685  }'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 ))
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:16:46.685    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 ))
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']'
00:16:46.685   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686  Malloc1
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686  [2024-12-09 23:58:02.300003] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:46.686    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:46.686    23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420
00:16:46.686  [2024-12-09 23:58:02.328630] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
00:16:46.686  Failed to write to /dev/nvme-fabrics: Input/output error
00:16:46.686  could not add new controller: failed to write to nvme-fabrics device
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:46.686   23:58:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:47.621   23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:16:47.621   23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:47.621   23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:47.621   23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:47.621   23:58:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:50.154   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:50.154    23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:50.154    23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:50.154   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:50.154   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:50.154   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:50.154   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:50.154  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:50.155    23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:50.155    23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:50.155  [2024-12-09 23:58:05.641525] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562'
00:16:50.155  Failed to write to /dev/nvme-fabrics: Input/output error
00:16:50.155  could not add new controller: failed to write to nvme-fabrics device
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:50.155   23:58:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:51.098   23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:16:51.098   23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:51.098   23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:51.098   23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:51.098   23:58:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:52.999   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:52.999    23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:52.999    23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:52.999   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:53.000   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:53.000   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:53.000   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:53.259  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.259    23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.259  [2024-12-09 23:58:08.943554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:53.259   23:58:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:54.634   23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:54.634   23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:54.634   23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:54.634   23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:54.634   23:58:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:56.537   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:56.538    23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:56.538    23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:56.538  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538  [2024-12-09 23:58:12.296607] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:56.538   23:58:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:57.914   23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:16:57.914   23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:16:57.914   23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:16:57.914   23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:16:57.914   23:58:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:16:59.816    23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:16:59.816    23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:16:59.816  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:16:59.816   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817  [2024-12-09 23:58:15.638554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:59.817   23:58:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:01.192   23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:01.192   23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:01.192   23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:01.192   23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:01.192   23:58:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:03.227    23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:03.227    23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:03.227  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227  [2024-12-09 23:58:18.973573] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.227   23:58:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:04.609   23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:04.609   23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:04.609   23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:04.609   23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:04.609   23:58:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:06.512    23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:06.512    23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:06.512  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512  [2024-12-09 23:58:22.305951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:06.512   23:58:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:17:07.889   23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:17:07.889   23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:17:07.889   23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:17:07.889   23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:17:07.889   23:58:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:17:09.792    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:17:09.792    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:17:09.792  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:17:09.792   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.051    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.051   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052  [2024-12-09 23:58:25.704810] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052  [2024-12-09 23:58:25.752817] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052  [2024-12-09 23:58:25.800951] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052  [2024-12-09 23:58:25.849112] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.052   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.053  [2024-12-09 23:58:25.897289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.053   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
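The iterations above all execute the same six-step body from target/rpc.sh lines 99-107: create a subsystem, attach a TCP listener and a namespace, open it to any host, then tear the namespace and subsystem back down. A standalone sketch of that loop shape, with `rpc` stubbed out via `echo` (the real harness routes it through scripts/rpc.py against a live SPDK target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the rpc.sh create/teardown loop seen in the log above.
# "rpc" is a stub that prints instead of invoking SPDK, so this runs standalone.
rpc() { echo "rpc.py $*"; }

loops=3                               # rpc.sh derives the real count from its config
nqn=nqn.2016-06.io.spdk:cnode1

for i in $(seq 1 "$loops"); do
  rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_ns "$nqn" Malloc1
  rpc nvmf_subsystem_allow_any_host "$nqn"
  rpc nvmf_subsystem_remove_ns "$nqn" 1
  rpc nvmf_delete_subsystem "$nqn"
done
```

Each pass leaves the target empty again, which is why the per-iteration output in the log is identical apart from the listener NOTICE timestamps.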
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:17:10.312  {
00:17:10.312    "tick_rate": 2100000000,
00:17:10.312    "poll_groups": [
00:17:10.312      {
00:17:10.312        "name": "nvmf_tgt_poll_group_000",
00:17:10.312        "admin_qpairs": 2,
00:17:10.312        "io_qpairs": 168,
00:17:10.312        "current_admin_qpairs": 0,
00:17:10.312        "current_io_qpairs": 0,
00:17:10.312        "pending_bdev_io": 0,
00:17:10.312        "completed_nvme_io": 365,
00:17:10.312        "transports": [
00:17:10.312          {
00:17:10.312            "trtype": "TCP"
00:17:10.312          }
00:17:10.312        ]
00:17:10.312      },
00:17:10.312      {
00:17:10.312        "name": "nvmf_tgt_poll_group_001",
00:17:10.312        "admin_qpairs": 2,
00:17:10.312        "io_qpairs": 168,
00:17:10.312        "current_admin_qpairs": 0,
00:17:10.312        "current_io_qpairs": 0,
00:17:10.312        "pending_bdev_io": 0,
00:17:10.312        "completed_nvme_io": 170,
00:17:10.312        "transports": [
00:17:10.312          {
00:17:10.312            "trtype": "TCP"
00:17:10.312          }
00:17:10.312        ]
00:17:10.312      },
00:17:10.312      {
00:17:10.312        "name": "nvmf_tgt_poll_group_002",
00:17:10.312        "admin_qpairs": 1,
00:17:10.312        "io_qpairs": 168,
00:17:10.312        "current_admin_qpairs": 0,
00:17:10.312        "current_io_qpairs": 0,
00:17:10.312        "pending_bdev_io": 0,
00:17:10.312        "completed_nvme_io": 267,
00:17:10.312        "transports": [
00:17:10.312          {
00:17:10.312            "trtype": "TCP"
00:17:10.312          }
00:17:10.312        ]
00:17:10.312      },
00:17:10.312      {
00:17:10.312        "name": "nvmf_tgt_poll_group_003",
00:17:10.312        "admin_qpairs": 2,
00:17:10.312        "io_qpairs": 168,
00:17:10.312        "current_admin_qpairs": 0,
00:17:10.312        "current_io_qpairs": 0,
00:17:10.312        "pending_bdev_io": 0,
00:17:10.312        "completed_nvme_io": 220,
00:17:10.312        "transports": [
00:17:10.312          {
00:17:10.312            "trtype": "TCP"
00:17:10.312          }
00:17:10.312        ]
00:17:10.312      }
00:17:10.312    ]
00:17:10.312  }
00:17:10.312  }'
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:10.312   23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:17:10.312    23:58:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:17:10.312    23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:17:10.312    23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 ))
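The `jsum` checks above total one numeric field across all poll groups (4 groups with admin_qpairs 2+2+1+2 = 7; 4 × 168 io_qpairs = 672). The real helper pipes `nvmf_get_stats` output through `jq` then `awk`; the sketch below reproduces the same aggregation with `grep`/`awk` only so it runs without jq installed. The field extraction here is a simplification, not the harness's actual implementation:

```shell
#!/usr/bin/env bash
# Sum one numeric field out of nvmf_get_stats-style JSON, as jsum does.
stats='{"poll_groups":[{"admin_qpairs":2},{"admin_qpairs":2},{"admin_qpairs":1},{"admin_qpairs":2}]}'

jsum_sketch() {                        # $1 = field name, JSON on stdin
  grep -o "\"$1\": *[0-9]*" | awk -F': *' '{s+=$2} END{print s}'
}

total=$(printf '%s' "$stats" | jsum_sketch admin_qpairs)
echo "$total"                          # 7, matching the (( 7 > 0 )) check above
```

The `(( total > 0 ))` guard is all the test asserts: it only checks that connections were actually distributed across the poll groups, not the exact counts.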
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:10.312  rmmod nvme_tcp
00:17:10.312  rmmod nvme_fabrics
00:17:10.312  rmmod nvme_keyring
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 3024313 ']'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 3024313
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3024313 ']'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3024313
00:17:10.312    23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:10.312    23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3024313
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3024313'
00:17:10.312  killing process with pid 3024313
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3024313
00:17:10.312   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3024313
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:10.572   23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:10.572    23:58:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:13.107  
00:17:13.107  real	0m32.961s
00:17:13.107  user	1m39.238s
00:17:13.107  sys	0m6.484s
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:17:13.107  ************************************
00:17:13.107  END TEST nvmf_rpc
00:17:13.107  ************************************
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:13.107  ************************************
00:17:13.107  START TEST nvmf_invalid
00:17:13.107  ************************************
00:17:13.107   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:17:13.107  * Looking for test storage...
00:17:13.107  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-:
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-:
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<'
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:13.107     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0
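The `lt 1.15 2` / `cmp_versions` trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, by splitting both dotted versions on `.-:` and comparing component by component. A much shorter sketch of the same decision, assuming GNU `sort -V` is available (the harness avoids that dependency by doing the split manually):

```shell
#!/usr/bin/env bash
# Version-compare sketch: returns 0 (true) when $1 sorts strictly before $2.
version_lt() {
  [ "$1" = "$2" ] && return 1
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt 1.15 2; then
  echo "lcov 1.15 predates 2.x: use the legacy --rc option names"
fi
```

That check is what selects the `--rc lcov_branch_coverage=1` style LCOV_OPTS exported in the lines that follow.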
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:13.107    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:13.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.107  		--rc genhtml_branch_coverage=1
00:17:13.107  		--rc genhtml_function_coverage=1
00:17:13.107  		--rc genhtml_legend=1
00:17:13.107  		--rc geninfo_all_blocks=1
00:17:13.107  		--rc geninfo_unexecuted_blocks=1
00:17:13.107  		
00:17:13.108  		'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:13.108  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.108  		--rc genhtml_branch_coverage=1
00:17:13.108  		--rc genhtml_function_coverage=1
00:17:13.108  		--rc genhtml_legend=1
00:17:13.108  		--rc geninfo_all_blocks=1
00:17:13.108  		--rc geninfo_unexecuted_blocks=1
00:17:13.108  		
00:17:13.108  		'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:13.108  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.108  		--rc genhtml_branch_coverage=1
00:17:13.108  		--rc genhtml_function_coverage=1
00:17:13.108  		--rc genhtml_legend=1
00:17:13.108  		--rc geninfo_all_blocks=1
00:17:13.108  		--rc geninfo_unexecuted_blocks=1
00:17:13.108  		
00:17:13.108  		'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:13.108  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:13.108  		--rc genhtml_branch_coverage=1
00:17:13.108  		--rc genhtml_function_coverage=1
00:17:13.108  		--rc genhtml_legend=1
00:17:13.108  		--rc geninfo_all_blocks=1
00:17:13.108  		--rc geninfo_unexecuted_blocks=1
00:17:13.108  		
00:17:13.108  		'
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:13.108     23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:13.108      23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.108      23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.108      23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.108      23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH
00:17:13.108      23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:13.108  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:13.108    23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable
00:17:13.108   23:58:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=()
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=()
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=()
00:17:19.678   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=()
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=()
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:19.679  Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:19.679  Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:19.679  Found net devices under 0000:af:00.0: cvl_0_0
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:19.679  Found net devices under 0000:af:00.1: cvl_0_1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:19.679   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:19.680  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:19.680  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms
00:17:19.680  
00:17:19.680  --- 10.0.0.2 ping statistics ---
00:17:19.680  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:19.680  rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:19.680  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:19.680  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms
00:17:19.680  
00:17:19.680  --- 10.0.0.1 ping statistics ---
00:17:19.680  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:19.680  rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=3031967
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 3031967
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3031967 ']'
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:19.680  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:19.680   23:58:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:19.680  [2024-12-09 23:58:34.767914] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:17:19.680  [2024-12-09 23:58:34.767957] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:19.680  [2024-12-09 23:58:34.844136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:19.680  [2024-12-09 23:58:34.885200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:19.680  [2024-12-09 23:58:34.885235] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:19.680  [2024-12-09 23:58:34.885243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:19.680  [2024-12-09 23:58:34.885248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:19.680  [2024-12-09 23:58:34.885254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:19.680  [2024-12-09 23:58:34.886681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:19.680  [2024-12-09 23:58:34.886786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:19.680  [2024-12-09 23:58:34.886895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:17:19.680  [2024-12-09 23:58:34.886897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:19.938   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:17:19.938    23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode13464
00:17:20.196  [2024-12-09 23:58:35.814561] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:17:20.196   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:17:20.196  {
00:17:20.196    "nqn": "nqn.2016-06.io.spdk:cnode13464",
00:17:20.196    "tgt_name": "foobar",
00:17:20.196    "method": "nvmf_create_subsystem",
00:17:20.196    "req_id": 1
00:17:20.196  }
00:17:20.196  Got JSON-RPC error response
00:17:20.196  response:
00:17:20.196  {
00:17:20.196    "code": -32603,
00:17:20.196    "message": "Unable to find target foobar"
00:17:20.196  }'
00:17:20.196   23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:17:20.196  {
00:17:20.196    "nqn": "nqn.2016-06.io.spdk:cnode13464",
00:17:20.196    "tgt_name": "foobar",
00:17:20.196    "method": "nvmf_create_subsystem",
00:17:20.196    "req_id": 1
00:17:20.196  }
00:17:20.196  Got JSON-RPC error response
00:17:20.196  response:
00:17:20.196  {
00:17:20.196    "code": -32603,
00:17:20.196    "message": "Unable to find target foobar"
00:17:20.196  } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:17:20.196     23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:17:20.196    23:58:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10342
00:17:20.196  [2024-12-09 23:58:36.015290] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10342: invalid serial number 'SPDKISFASTANDAWESOME'
00:17:20.196   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:17:20.196  {
00:17:20.196    "nqn": "nqn.2016-06.io.spdk:cnode10342",
00:17:20.196    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:20.196    "method": "nvmf_create_subsystem",
00:17:20.196    "req_id": 1
00:17:20.196  }
00:17:20.196  Got JSON-RPC error response
00:17:20.196  response:
00:17:20.196  {
00:17:20.196    "code": -32602,
00:17:20.196    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:20.196  }'
00:17:20.196   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:17:20.196  {
00:17:20.196    "nqn": "nqn.2016-06.io.spdk:cnode10342",
00:17:20.196    "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:17:20.196    "method": "nvmf_create_subsystem",
00:17:20.196    "req_id": 1
00:17:20.196  }
00:17:20.196  Got JSON-RPC error response
00:17:20.196  response:
00:17:20.196  {
00:17:20.196    "code": -32602,
00:17:20.196    "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:17:20.196  } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:17:20.196     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:17:20.196    23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode13508
00:17:20.455  [2024-12-09 23:58:36.203877] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13508: invalid model number 'SPDK_Controller'
00:17:20.455   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:17:20.455  {
00:17:20.455    "nqn": "nqn.2016-06.io.spdk:cnode13508",
00:17:20.455    "model_number": "SPDK_Controller\u001f",
00:17:20.455    "method": "nvmf_create_subsystem",
00:17:20.455    "req_id": 1
00:17:20.455  }
00:17:20.455  Got JSON-RPC error response
00:17:20.455  response:
00:17:20.455  {
00:17:20.455    "code": -32602,
00:17:20.455    "message": "Invalid MN SPDK_Controller\u001f"
00:17:20.455  }'
00:17:20.455   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:17:20.455  {
00:17:20.455    "nqn": "nqn.2016-06.io.spdk:cnode13508",
00:17:20.455    "model_number": "SPDK_Controller\u001f",
00:17:20.455    "method": "nvmf_create_subsystem",
00:17:20.455    "req_id": 1
00:17:20.455  }
00:17:20.455  Got JSON-RPC error response
00:17:20.455  response:
00:17:20.455  {
00:17:20.455    "code": -32602,
00:17:20.455    "message": "Invalid MN SPDK_Controller\u001f"
00:17:20.455  } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ ))
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:17:20.455       23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61
00:17:20.455      23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d'
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+==
00:17:20.455     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [xtrace condensed: one printf %x / echo -e pair per iteration] string+= '/' 'E' '!' '#' '$' 't' 'E' 'k' '?' 'A' '<' '&' '+' 'w'
00:17:20.714     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]]
00:17:20.714     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']/3d?C=/E!#$tEk?A<&+w'
00:17:20.714    23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ']/3d?C=/E!#$tEk?A<&+w' nqn.2016-06.io.spdk:cnode28265
00:17:20.714  [2024-12-09 23:58:36.549036] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28265: invalid serial number ']/3d?C=/E!#$tEk?A<&+w'
00:17:20.973   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:17:20.973  {
00:17:20.973    "nqn": "nqn.2016-06.io.spdk:cnode28265",
00:17:20.973    "serial_number": "]/3d?C=/E!#$tEk?A<&+w",
00:17:20.973    "method": "nvmf_create_subsystem",
00:17:20.973    "req_id": 1
00:17:20.973  }
00:17:20.973  Got JSON-RPC error response
00:17:20.973  response:
00:17:20.973  {
00:17:20.973    "code": -32602,
00:17:20.973    "message": "Invalid SN ]/3d?C=/E!#$tEk?A<&+w"
00:17:20.973  }'
00:17:20.973   23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request:
00:17:20.973  {
00:17:20.973    "nqn": "nqn.2016-06.io.spdk:cnode28265",
00:17:20.973    "serial_number": "]/3d?C=/E!#$tEk?A<&+w",
00:17:20.973    "method": "nvmf_create_subsystem",
00:17:20.973    "req_id": 1
00:17:20.973  }
00:17:20.973  Got JSON-RPC error response
00:17:20.973  response:
00:17:20.973  {
00:17:20.973    "code": -32602,
00:17:20.973    "message": "Invalid SN ]/3d?C=/E!#$tEk?A<&+w"
00:17:20.973  } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:17:20.973     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:17:20.973     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:17:20.973     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:17:20.974     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:17:20.974     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:17:20.974     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:17:20.974     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # [xtrace condensed: one printf %x / echo -e pair per iteration] string+= '9' '^' 'o' '8' 'b' 'a' '~' 'R' 'a' ')' 'H' '-' '=' 'l' '.' 'V' '!' 'T' 'R' 'R' 'H' 'x' ')' 'G' '\' 'e' 'c' 'T' 'W' 'B' 'F' '}' '?' 'A' 'f' '_' '4' '<' 'U' '>' '5'
00:17:21.233     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 9 == \- ]]
00:17:21.233     23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '9^o8ba~Ra)H-=l.V!TRRHx)G\ecTWBF}?Af_4<U>5'
00:17:21.233    23:58:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9^o8ba~Ra)H-=l.V!TRRHx)G\ecTWBF}?Af_4<U>5' nqn.2016-06.io.spdk:cnode19788
00:17:21.233  [2024-12-09 23:58:37.018599] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19788: invalid model number '9^o8ba~Ra)H-=l.V!TRRHx)G\ecTWBF}?Af_4<U>5'
00:17:21.233   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:17:21.233  {
00:17:21.233    "nqn": "nqn.2016-06.io.spdk:cnode19788",
00:17:21.233    "model_number": "9^o8ba~Ra)H-=l.V!TRRHx)G\\ecTWBF}?Af_4<U>5",
00:17:21.233    "method": "nvmf_create_subsystem",
00:17:21.233    "req_id": 1
00:17:21.233  }
00:17:21.233  Got JSON-RPC error response
00:17:21.233  response:
00:17:21.233  {
00:17:21.233    "code": -32602,
00:17:21.234    "message": "Invalid MN 9^o8ba~Ra)H-=l.V!TRRHx)G\\ecTWBF}?Af_4<U>5"
00:17:21.234  }'
00:17:21.234   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request:
00:17:21.234  {
00:17:21.234    "nqn": "nqn.2016-06.io.spdk:cnode19788",
00:17:21.234    "model_number": "9^o8ba~Ra)H-=l.V!TRRHx)G\\ecTWBF}?Af_4<U>5",
00:17:21.234    "method": "nvmf_create_subsystem",
00:17:21.234    "req_id": 1
00:17:21.234  }
00:17:21.234  Got JSON-RPC error response
00:17:21.234  response:
00:17:21.234  {
00:17:21.234    "code": -32602,
00:17:21.234    "message": "Invalid MN 9^o8ba~Ra)H-=l.V!TRRHx)G\\ecTWBF}?Af_4<U>5"
00:17:21.234  } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:17:21.234   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp
00:17:21.492  [2024-12-09 23:58:37.211301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:21.492   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
00:17:21.749   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]]
00:17:21.749    23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo ''
00:17:21.749    23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1
00:17:21.749   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=
00:17:21.749    23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421
00:17:21.749  [2024-12-09 23:58:37.605916] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2
00:17:22.007   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request:
00:17:22.007  {
00:17:22.007    "nqn": "nqn.2016-06.io.spdk:cnode",
00:17:22.007    "listen_address": {
00:17:22.007      "trtype": "tcp",
00:17:22.007      "traddr": "",
00:17:22.007      "trsvcid": "4421"
00:17:22.007    },
00:17:22.007    "method": "nvmf_subsystem_remove_listener",
00:17:22.007    "req_id": 1
00:17:22.007  }
00:17:22.007  Got JSON-RPC error response
00:17:22.007  response:
00:17:22.007  {
00:17:22.007    "code": -32602,
00:17:22.007    "message": "Invalid parameters"
00:17:22.007  }'
00:17:22.007   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request:
00:17:22.007  {
00:17:22.007    "nqn": "nqn.2016-06.io.spdk:cnode",
00:17:22.007    "listen_address": {
00:17:22.007      "trtype": "tcp",
00:17:22.007      "traddr": "",
00:17:22.007      "trsvcid": "4421"
00:17:22.007    },
00:17:22.007    "method": "nvmf_subsystem_remove_listener",
00:17:22.007    "req_id": 1
00:17:22.007  }
00:17:22.007  Got JSON-RPC error response
00:17:22.007  response:
00:17:22.007  {
00:17:22.007    "code": -32602,
00:17:22.007    "message": "Invalid parameters"
00:17:22.007  } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]]
00:17:22.007    23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14960 -i 0
00:17:22.007  [2024-12-09 23:58:37.806574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14960: invalid cntlid range [0-65519]
00:17:22.007   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request:
00:17:22.007  {
00:17:22.007    "nqn": "nqn.2016-06.io.spdk:cnode14960",
00:17:22.007    "min_cntlid": 0,
00:17:22.007    "method": "nvmf_create_subsystem",
00:17:22.007    "req_id": 1
00:17:22.007  }
00:17:22.007  Got JSON-RPC error response
00:17:22.007  response:
00:17:22.007  {
00:17:22.007    "code": -32602,
00:17:22.007    "message": "Invalid cntlid range [0-65519]"
00:17:22.007  }'
00:17:22.007   23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request:
00:17:22.007  {
00:17:22.007    "nqn": "nqn.2016-06.io.spdk:cnode14960",
00:17:22.007    "min_cntlid": 0,
00:17:22.007    "method": "nvmf_create_subsystem",
00:17:22.007    "req_id": 1
00:17:22.007  }
00:17:22.007  Got JSON-RPC error response
00:17:22.007  response:
00:17:22.007  {
00:17:22.007    "code": -32602,
00:17:22.007    "message": "Invalid cntlid range [0-65519]"
00:17:22.007  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:17:22.007    23:58:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode239 -i 65520
00:17:22.264  [2024-12-09 23:58:38.015339] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode239: invalid cntlid range [65520-65519]
00:17:22.264   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request:
00:17:22.264  {
00:17:22.264    "nqn": "nqn.2016-06.io.spdk:cnode239",
00:17:22.264    "min_cntlid": 65520,
00:17:22.264    "method": "nvmf_create_subsystem",
00:17:22.264    "req_id": 1
00:17:22.264  }
00:17:22.264  Got JSON-RPC error response
00:17:22.264  response:
00:17:22.264  {
00:17:22.264    "code": -32602,
00:17:22.264    "message": "Invalid cntlid range [65520-65519]"
00:17:22.264  }'
00:17:22.264   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request:
00:17:22.264  {
00:17:22.264    "nqn": "nqn.2016-06.io.spdk:cnode239",
00:17:22.264    "min_cntlid": 65520,
00:17:22.264    "method": "nvmf_create_subsystem",
00:17:22.264    "req_id": 1
00:17:22.264  }
00:17:22.264  Got JSON-RPC error response
00:17:22.264  response:
00:17:22.264  {
00:17:22.264    "code": -32602,
00:17:22.264    "message": "Invalid cntlid range [65520-65519]"
00:17:22.264  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:17:22.264    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23534 -I 0
00:17:22.522  [2024-12-09 23:58:38.232043] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23534: invalid cntlid range [1-0]
00:17:22.522   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request:
00:17:22.522  {
00:17:22.522    "nqn": "nqn.2016-06.io.spdk:cnode23534",
00:17:22.522    "max_cntlid": 0,
00:17:22.522    "method": "nvmf_create_subsystem",
00:17:22.522    "req_id": 1
00:17:22.522  }
00:17:22.522  Got JSON-RPC error response
00:17:22.522  response:
00:17:22.522  {
00:17:22.522    "code": -32602,
00:17:22.522    "message": "Invalid cntlid range [1-0]"
00:17:22.522  }'
00:17:22.522   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request:
00:17:22.522  {
00:17:22.522    "nqn": "nqn.2016-06.io.spdk:cnode23534",
00:17:22.522    "max_cntlid": 0,
00:17:22.522    "method": "nvmf_create_subsystem",
00:17:22.522    "req_id": 1
00:17:22.522  }
00:17:22.522  Got JSON-RPC error response
00:17:22.522  response:
00:17:22.522  {
00:17:22.522    "code": -32602,
00:17:22.522    "message": "Invalid cntlid range [1-0]"
00:17:22.522  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:17:22.522    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7649 -I 65520
00:17:22.779  [2024-12-09 23:58:38.432724] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7649: invalid cntlid range [1-65520]
00:17:22.779   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request:
00:17:22.779  {
00:17:22.779    "nqn": "nqn.2016-06.io.spdk:cnode7649",
00:17:22.779    "max_cntlid": 65520,
00:17:22.780    "method": "nvmf_create_subsystem",
00:17:22.780    "req_id": 1
00:17:22.780  }
00:17:22.780  Got JSON-RPC error response
00:17:22.780  response:
00:17:22.780  {
00:17:22.780    "code": -32602,
00:17:22.780    "message": "Invalid cntlid range [1-65520]"
00:17:22.780  }'
00:17:22.780   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request:
00:17:22.780  {
00:17:22.780    "nqn": "nqn.2016-06.io.spdk:cnode7649",
00:17:22.780    "max_cntlid": 65520,
00:17:22.780    "method": "nvmf_create_subsystem",
00:17:22.780    "req_id": 1
00:17:22.780  }
00:17:22.780  Got JSON-RPC error response
00:17:22.780  response:
00:17:22.780  {
00:17:22.780    "code": -32602,
00:17:22.780    "message": "Invalid cntlid range [1-65520]"
00:17:22.780  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:17:22.780    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10102 -i 6 -I 5
00:17:22.780  [2024-12-09 23:58:38.637445] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10102: invalid cntlid range [6-5]
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request:
00:17:23.038  {
00:17:23.038    "nqn": "nqn.2016-06.io.spdk:cnode10102",
00:17:23.038    "min_cntlid": 6,
00:17:23.038    "max_cntlid": 5,
00:17:23.038    "method": "nvmf_create_subsystem",
00:17:23.038    "req_id": 1
00:17:23.038  }
00:17:23.038  Got JSON-RPC error response
00:17:23.038  response:
00:17:23.038  {
00:17:23.038    "code": -32602,
00:17:23.038    "message": "Invalid cntlid range [6-5]"
00:17:23.038  }'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request:
00:17:23.038  {
00:17:23.038    "nqn": "nqn.2016-06.io.spdk:cnode10102",
00:17:23.038    "min_cntlid": 6,
00:17:23.038    "max_cntlid": 5,
00:17:23.038    "method": "nvmf_create_subsystem",
00:17:23.038    "req_id": 1
00:17:23.038  }
00:17:23.038  Got JSON-RPC error response
00:17:23.038  response:
00:17:23.038  {
00:17:23.038    "code": -32602,
00:17:23.038    "message": "Invalid cntlid range [6-5]"
00:17:23.038  } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]
00:17:23.038    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request:
00:17:23.038  {
00:17:23.038    "name": "foobar",
00:17:23.038    "method": "nvmf_delete_target",
00:17:23.038    "req_id": 1
00:17:23.038  }
00:17:23.038  Got JSON-RPC error response
00:17:23.038  response:
00:17:23.038  {
00:17:23.038    "code": -32602,
00:17:23.038    "message": "The specified target doesn'\''t exist, cannot delete it."
00:17:23.038  }'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request:
00:17:23.038  {
00:17:23.038    "name": "foobar",
00:17:23.038    "method": "nvmf_delete_target",
00:17:23.038    "req_id": 1
00:17:23.038  }
00:17:23.038  Got JSON-RPC error response
00:17:23.038  response:
00:17:23.038  {
00:17:23.038    "code": -32602,
00:17:23.038    "message": "The specified target doesn't exist, cannot delete it."
00:17:23.038  } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]]
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:23.038  rmmod nvme_tcp
00:17:23.038  rmmod nvme_fabrics
00:17:23.038  rmmod nvme_keyring
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 3031967 ']'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 3031967
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3031967 ']'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3031967
00:17:23.038    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:23.038    23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031967
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031967'
00:17:23.038  killing process with pid 3031967
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3031967
00:17:23.038   23:58:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3031967
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:23.297   23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:23.297    23:58:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:25.835  
00:17:25.835  real	0m12.613s
00:17:25.835  user	0m20.968s
00:17:25.835  sys	0m5.373s
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:25.835  ************************************
00:17:25.835  END TEST nvmf_invalid
00:17:25.835  ************************************
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:25.835  ************************************
00:17:25.835  START TEST nvmf_connect_stress
00:17:25.835  ************************************
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:17:25.835  * Looking for test storage...
00:17:25.835  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-:
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-:
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:25.835  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:25.835  		--rc genhtml_branch_coverage=1
00:17:25.835  		--rc genhtml_function_coverage=1
00:17:25.835  		--rc genhtml_legend=1
00:17:25.835  		--rc geninfo_all_blocks=1
00:17:25.835  		--rc geninfo_unexecuted_blocks=1
00:17:25.835  		
00:17:25.835  		'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:25.835  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:25.835  		--rc genhtml_branch_coverage=1
00:17:25.835  		--rc genhtml_function_coverage=1
00:17:25.835  		--rc genhtml_legend=1
00:17:25.835  		--rc geninfo_all_blocks=1
00:17:25.835  		--rc geninfo_unexecuted_blocks=1
00:17:25.835  		
00:17:25.835  		'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:25.835  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:25.835  		--rc genhtml_branch_coverage=1
00:17:25.835  		--rc genhtml_function_coverage=1
00:17:25.835  		--rc genhtml_legend=1
00:17:25.835  		--rc geninfo_all_blocks=1
00:17:25.835  		--rc geninfo_unexecuted_blocks=1
00:17:25.835  		
00:17:25.835  		'
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:25.835  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:25.835  		--rc genhtml_branch_coverage=1
00:17:25.835  		--rc genhtml_function_coverage=1
00:17:25.835  		--rc genhtml_legend=1
00:17:25.835  		--rc geninfo_all_blocks=1
00:17:25.835  		--rc geninfo_unexecuted_blocks=1
00:17:25.835  		
00:17:25.835  		'
00:17:25.835   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:25.835     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:25.835    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:25.836     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:25.836     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:17:25.836     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:25.836     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:25.836     23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:25.836      23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:25.836      23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:25.836      23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:25.836      23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH
00:17:25.836      23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:25.836  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:25.836    23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:17:25.836   23:58:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=()
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:32.408   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:32.409  Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:32.409  Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:32.409  Found net devices under 0000:af:00.0: cvl_0_0
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:32.409  Found net devices under 0000:af:00.1: cvl_0_1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:17:32.409   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:32.409  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:32.409  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms
00:17:32.409  
00:17:32.409  --- 10.0.0.2 ping statistics ---
00:17:32.409  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:32.409  rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:32.410  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:32.410  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:17:32.410  
00:17:32.410  --- 10.0.0.1 ping statistics ---
00:17:32.410  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:32.410  rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=3036278
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 3036278
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3036278 ']'
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:32.410  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410  [2024-12-09 23:58:47.380900] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:17:32.410  [2024-12-09 23:58:47.380943] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:32.410  [2024-12-09 23:58:47.458610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:17:32.410  [2024-12-09 23:58:47.498416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:32.410  [2024-12-09 23:58:47.498454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:32.410  [2024-12-09 23:58:47.498461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:32.410  [2024-12-09 23:58:47.498477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:32.410  [2024-12-09 23:58:47.498482] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:32.410  [2024-12-09 23:58:47.499806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:17:32.410  [2024-12-09 23:58:47.499909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:17:32.410  [2024-12-09 23:58:47.499911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410  [2024-12-09 23:58:47.648554] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410  [2024-12-09 23:58:47.672773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.410  NULL1
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3036301
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:17:32.410    23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.410   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20)
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.411   23:58:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.411   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.411   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:32.411   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.411   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.411   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.669   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.669   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:32.669   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.669   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.669   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:32.927   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:32.927   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:32.927   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:32.927   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:32.927   23:58:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.493   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.493   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:33.493   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:33.493   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.493   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:33.752   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:33.752   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:33.752   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:17:33.752   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:33.752   23:58:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
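The repeated `kill -0` / `rpc_cmd` lines above are connect_stress.sh polling target liveness between RPC calls: `kill -0` sends no signal, it only checks that the PID still exists. A minimal sketch of that poll loop (PID and bound are illustrative, not SPDK's exact code):

```shell
# Liveness-poll sketch: 'kill -0 <pid>' checks process existence without
# signalling it; the loop is bounded here only so the sketch terminates.
poll_target() {
    local pid=$1 polls=0
    while kill -0 "$pid" 2>/dev/null; do
        polls=$((polls + 1))
        [ "$polls" -ge 5 ] && break
        sleep 0.1
    done
    echo "$polls"
}
sleep 2 &
bg=$!
polls_done=$(poll_target "$bg")
kill "$bg" 2>/dev/null
echo "polled $polls_done times"
```

When the target exits mid-test, the next `kill -0` fails with "No such process", as happens in the log a few lines below.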
00:17:41.998  Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3036301
00:17:42.257  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3036301) - No such process
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3036301
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:42.257  rmmod nvme_tcp
00:17:42.257  rmmod nvme_fabrics
00:17:42.257  rmmod nvme_keyring
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0
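The `set +e` ... `modprobe -v -r` ... `set -e` sequence above is a tolerant-unload pattern: errexit is suspended, module removal is retried in a loop (`for i in {1..20}`), then errexit is restored. A sketch of the pattern with a stand-in for the unload command (the stub and retry count are illustrative):

```shell
# Tolerant-unload sketch: disable errexit, retry the removal, restore errexit.
set +e
attempts=0
try_unload() {              # stand-in for 'modprobe -v -r nvme-tcp'
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]   # pretend removal succeeds on the third try
}
for i in 1 2 3 4 5; do
    try_unload && break
done
set -e
echo "unloaded after $attempts attempts"
```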
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 3036278 ']'
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 3036278
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3036278 ']'
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3036278
00:17:42.257    23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:42.257    23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3036278
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3036278'
00:17:42.257  killing process with pid 3036278
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3036278
00:17:42.257   23:58:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3036278
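The `killprocess` trace above guards the teardown: it bails on an empty PID, resolves the process name with `ps -o comm=` on Linux, refuses to signal a `sudo` wrapper, and finally kills and waits. A hedged sketch of that shape (function body is illustrative, not SPDK's exact code):

```shell
# Guarded-kill sketch: validate the pid, inspect the process name, never
# signal a sudo wrapper, then kill and reap.
killprocess_sketch() {
    local pid=$1 name
    [ -z "$pid" ] && return 1
    name=$(ps -o comm= -p "$pid" 2>/dev/null) || return 1
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid" || return 1
    wait "$pid" 2>/dev/null || true   # reap; wait's status reflects the signal
}
sleep 5 &
bg=$!
killprocess_sketch "$bg"
```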
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save
00:17:42.516   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore
00:17:42.517   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:42.517   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:42.517   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:42.517   23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:42.517    23:58:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:44.422  
00:17:44.422  real	0m19.017s
00:17:44.422  user	0m39.474s
00:17:44.422  sys	0m8.500s
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:17:44.422  ************************************
00:17:44.422  END TEST nvmf_connect_stress
00:17:44.422  ************************************
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:44.422  ************************************
00:17:44.422  START TEST nvmf_fused_ordering
00:17:44.422  ************************************
00:17:44.422   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:17:44.682  * Looking for test storage...
00:17:44.682  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:44.682     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
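The `cmp_versions` trace above (from scripts/common.sh) evaluates `lt 1.15 2` by splitting both versions on `IFS=.-:` into arrays and comparing them component by component, padding the shorter array with zeros; the first differing component decides. A simplified bash sketch of that comparison:

```shell
# Component-wise version compare, mirroring the traced cmp_versions logic
# (simplified to the less-than case only).
version_lt() {
    local IFS=.-:
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n=${#a[@]} i x y
    [ "${#b[@]}" -gt "$n" ] && n=${#b[@]}
    for ((i = 0; i < n; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
        ((x < y)) && return 0       # first differing component decides
        ((x > y)) && return 1
    done
    return 1                        # equal is not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```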
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:44.682  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:44.682  		--rc genhtml_branch_coverage=1
00:17:44.682  		--rc genhtml_function_coverage=1
00:17:44.682  		--rc genhtml_legend=1
00:17:44.682  		--rc geninfo_all_blocks=1
00:17:44.682  		--rc geninfo_unexecuted_blocks=1
00:17:44.682  		
00:17:44.682  		'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:44.682  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:44.682  		--rc genhtml_branch_coverage=1
00:17:44.682  		--rc genhtml_function_coverage=1
00:17:44.682  		--rc genhtml_legend=1
00:17:44.682  		--rc geninfo_all_blocks=1
00:17:44.682  		--rc geninfo_unexecuted_blocks=1
00:17:44.682  		
00:17:44.682  		'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:44.682  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:44.682  		--rc genhtml_branch_coverage=1
00:17:44.682  		--rc genhtml_function_coverage=1
00:17:44.682  		--rc genhtml_legend=1
00:17:44.682  		--rc geninfo_all_blocks=1
00:17:44.682  		--rc geninfo_unexecuted_blocks=1
00:17:44.682  		
00:17:44.682  		'
00:17:44.682    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:44.682  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:44.682  		--rc genhtml_branch_coverage=1
00:17:44.682  		--rc genhtml_function_coverage=1
00:17:44.682  		--rc genhtml_legend=1
00:17:44.683  		--rc geninfo_all_blocks=1
00:17:44.683  		--rc geninfo_unexecuted_blocks=1
00:17:44.683  		
00:17:44.683  		'
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:44.683     23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:44.683      23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:44.683      23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:44.683      23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:44.683      23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:17:44.683      23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:44.683  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0
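The `[: : integer expression expected` message above comes from `common.sh` line 33 running `[ '' -eq 1 ]`: `-eq` requires integer operands, so an empty variable makes the test itself error out (and evaluate false). A small sketch of the failure mode and the usual defensive default, assuming a hypothetical `check_flag` helper rather than the actual `common.sh` code:

```shell
# [ "" -eq 1 ] prints "integer expression expected" and the branch is
# not taken; defaulting with ${var:-0} keeps both operands integers.
check_flag() {
    local flag="$1"
    if [ "${flag:-0}" -eq 1 ]; then
        echo taken
    else
        echo skipped
    fi
}
```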
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:44.683    23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable
00:17:44.683   23:59:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=()
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
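Lines 325-356 above bucket PCI addresses by vendor:device ID (Intel `0x8086` E810/X722 parts, Mellanox `0x15b3` ConnectX parts) and then narrow `pci_devs` to the requested NIC family. A minimal sketch of that selection, with a hypothetical prefilled `pci_bus_cache` instead of a real bus scan:

```shell
# Sketch of the bucketing above; pci_bus_cache is prefilled here
# (the real one is populated by scanning the PCI bus).
declare -A pci_bus_cache=(
    ["0x8086:0x159b"]="0000:af:00.0 0000:af:00.1"   # E810-XXV, as in this run
    ["0x15b3:0x1017"]="0000:b0:00.0"                # ConnectX-5 (example)
)
e810=() mlx=() pci_devs=()
e810+=(${pci_bus_cache["0x8086:0x159b"]})
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})

# This run selects the e810 family, so only the E810 ports remain.
nic_type=e810
[[ $nic_type == e810 ]] && pci_devs=("${e810[@]}")
```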
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:17:51.253  Found 0000:af:00.0 (0x8086 - 0x159b)
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:17:51.253  Found 0000:af:00.1 (0x8086 - 0x159b)
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:17:51.253   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:17:51.254  Found net devices under 0000:af:00.0: cvl_0_0
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:17:51.254  Found net devices under 0000:af:00.1: cvl_0_1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
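The discovery loop above finds each port's netdev by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix, which is how `0000:af:00.0` maps to `cvl_0_0`. A runnable sketch against a scratch directory standing in for sysfs (the real paths need the hardware present):

```shell
# Sketch of the net-dev discovery loop, run against a temp directory
# that mimics /sys/bus/pci/devices for two E810 ports.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs prefix
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```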
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
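The `nvmf_tcp_init` commands above move the target port into its own network namespace and address both ends so initiator (10.0.0.1 on `cvl_0_1`) and target (10.0.0.2 on `cvl_0_0` inside `cvl_0_0_ns_spdk`) can talk over real NICs. A dry-run sketch that prints the `ip(8)` sequence instead of executing it (the real commands need root and the `cvl_0_*` interfaces); `setup_ns` is a hypothetical wrapper, not an SPDK helper:

```shell
# Dry-run sketch of the namespace plumbing traced above.
setup_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini_if"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
}
setup_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```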
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
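Note how `ipts` expands to `iptables` with an extra `-m comment --comment 'SPDK_NVMF:<args>'`: the rule is tagged with its own arguments so teardown can later match and delete exactly the rules this test added. A sketch of that wrapper which echoes the command instead of running it (the real one executes iptables and needs root):

```shell
# Sketch of the ipts wrapper: tag each rule with a comment embedding
# the original arguments, echoed here instead of executed.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```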
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:17:51.254  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:51.254  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms
00:17:51.254  
00:17:51.254  --- 10.0.0.2 ping statistics ---
00:17:51.254  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:51.254  rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:51.254  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:51.254  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms
00:17:51.254  
00:17:51.254  --- 10.0.0.1 ping statistics ---
00:17:51.254  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:51.254  rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=3041564
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 3041564
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3041564 ']'
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:51.254  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:51.254   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
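`waitforlisten` above blocks until `nvmf_tgt` (pid 3041564) is up and listening on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A minimal sketch of that polling pattern, assuming a hypothetical `wait_for_path` that only checks path existence (the real helper also verifies the pid is alive and probes the RPC socket):

```shell
# Minimal sketch of waitforlisten: poll until the RPC socket path
# exists, up to a bounded number of retries.
wait_for_path() {
    local path=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```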
00:17:51.254  [2024-12-09 23:59:06.441370] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:17:51.254  [2024-12-09 23:59:06.441416] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:51.254  [2024-12-09 23:59:06.519854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:51.255  [2024-12-09 23:59:06.556589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:51.255  [2024-12-09 23:59:06.556620] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:51.255  [2024-12-09 23:59:06.556627] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:51.255  [2024-12-09 23:59:06.556633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:51.255  [2024-12-09 23:59:06.556638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:17:51.255  [2024-12-09 23:59:06.557116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255  [2024-12-09 23:59:06.704148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255  [2024-12-09 23:59:06.728342] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255  NULL1
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
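The `rpc_cmd` calls above configure the target end to end: create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1`, add its listener on 10.0.0.2:4420, create the 1000 MiB / 512-byte-block null bdev, wait for bdev examine, and attach the bdev as a namespace. Collected as one dry-run list of `rpc.py` invocations (a sketch; actually running them needs `nvmf_tgt` listening on `/var/tmp/spdk.sock` inside the namespace):

```shell
# The target-side configuration sequence traced above, as rpc.py calls.
rpc_sequence() {
    cat <<'EOF'
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
EOF
}
rpc_sequence
```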
00:17:51.255   23:59:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:17:51.255  [2024-12-09 23:59:06.788083] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:17:51.255  [2024-12-09 23:59:06.788132] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3041586 ]
00:17:51.255  Attached to nqn.2016-06.io.spdk:cnode1
00:17:51.255    Namespace ID: 1 size: 1GB
00:17:51.255  fused_ordering(0)
00:17:51.255  fused_ordering(1)
00:17:51.255  fused_ordering(2)
00:17:51.255  fused_ordering(3)
00:17:51.255  fused_ordering(4)
00:17:51.255  fused_ordering(5)
00:17:51.255  fused_ordering(6)
00:17:51.255  fused_ordering(7)
00:17:51.255  fused_ordering(8)
00:17:51.255  fused_ordering(9)
00:17:51.255  fused_ordering(10)
00:17:51.255  fused_ordering(11)
00:17:51.255  fused_ordering(12)
00:17:51.255  fused_ordering(13)
00:17:51.255  fused_ordering(14)
00:17:51.255  fused_ordering(15)
00:17:51.255  fused_ordering(16)
00:17:51.255  fused_ordering(17)
00:17:51.255  fused_ordering(18)
00:17:51.255  fused_ordering(19)
00:17:51.255  fused_ordering(20)
00:17:51.255  fused_ordering(21)
00:17:51.255  fused_ordering(22)
00:17:51.255  fused_ordering(23)
00:17:51.255  fused_ordering(24)
00:17:51.255  fused_ordering(25)
00:17:51.255  fused_ordering(26)
00:17:51.255  fused_ordering(27)
00:17:51.255  fused_ordering(28)
00:17:51.255  fused_ordering(29)
00:17:51.255  fused_ordering(30)
00:17:51.255  fused_ordering(31)
00:17:51.255  fused_ordering(32)
00:17:51.255  fused_ordering(33)
00:17:51.255  fused_ordering(34)
00:17:51.255  fused_ordering(35)
00:17:51.255  fused_ordering(36)
00:17:51.255  fused_ordering(37)
00:17:51.255  fused_ordering(38)
00:17:51.255  fused_ordering(39)
00:17:51.255  fused_ordering(40)
00:17:51.255  fused_ordering(41)
00:17:51.255  fused_ordering(42)
00:17:51.255  fused_ordering(43)
00:17:51.255  fused_ordering(44)
00:17:51.255  fused_ordering(45)
00:17:51.255  fused_ordering(46)
00:17:51.255  fused_ordering(47)
00:17:51.255  fused_ordering(48)
00:17:51.255  fused_ordering(49)
00:17:51.255  fused_ordering(50)
00:17:51.255  fused_ordering(51)
00:17:51.255  fused_ordering(52)
00:17:51.255  fused_ordering(53)
00:17:51.255  fused_ordering(54)
00:17:51.255  fused_ordering(55)
00:17:51.255  fused_ordering(56)
00:17:51.255  fused_ordering(57)
00:17:51.255  fused_ordering(58)
00:17:51.255  fused_ordering(59)
00:17:51.255  fused_ordering(60)
00:17:51.255  fused_ordering(61)
00:17:51.255  fused_ordering(62)
00:17:51.255  fused_ordering(63)
00:17:51.255  fused_ordering(64)
00:17:51.255  fused_ordering(65)
00:17:51.255  fused_ordering(66)
00:17:51.255  fused_ordering(67)
00:17:51.255  fused_ordering(68)
00:17:51.255  fused_ordering(69)
00:17:51.255  fused_ordering(70)
00:17:51.255  fused_ordering(71)
00:17:51.255  fused_ordering(72)
00:17:51.255  fused_ordering(73)
00:17:51.255  fused_ordering(74)
00:17:51.255  fused_ordering(75)
00:17:51.255  fused_ordering(76)
00:17:51.255  fused_ordering(77)
00:17:51.255  fused_ordering(78)
00:17:51.255  fused_ordering(79)
00:17:51.255  fused_ordering(80)
00:17:51.255  fused_ordering(81)
00:17:51.255  fused_ordering(82)
00:17:51.255  fused_ordering(83)
00:17:51.255  fused_ordering(84)
00:17:51.255  fused_ordering(85)
00:17:51.255  fused_ordering(86)
00:17:51.255  fused_ordering(87)
00:17:51.255  fused_ordering(88)
00:17:51.255  fused_ordering(89)
00:17:51.255  fused_ordering(90)
00:17:51.255  fused_ordering(91)
00:17:51.255  fused_ordering(92)
00:17:51.255  fused_ordering(93)
00:17:51.255  fused_ordering(94)
00:17:51.255  fused_ordering(95)
00:17:51.255  fused_ordering(96)
00:17:51.255  fused_ordering(97)
00:17:51.255  fused_ordering(98)
00:17:51.255  fused_ordering(99)
00:17:51.255  fused_ordering(100)
00:17:51.255  fused_ordering(101)
00:17:51.255  fused_ordering(102)
00:17:51.255  fused_ordering(103)
00:17:51.255  fused_ordering(104)
00:17:51.255  fused_ordering(105)
00:17:51.255  fused_ordering(106)
00:17:51.255  fused_ordering(107)
00:17:51.255  fused_ordering(108)
00:17:51.255  fused_ordering(109)
00:17:51.255  fused_ordering(110)
00:17:51.255  fused_ordering(111)
00:17:51.255  fused_ordering(112)
00:17:51.255  fused_ordering(113)
00:17:51.255  fused_ordering(114)
00:17:51.255  fused_ordering(115)
00:17:51.255  fused_ordering(116)
00:17:51.255  fused_ordering(117)
00:17:51.255  fused_ordering(118)
00:17:51.255  fused_ordering(119)
00:17:51.255  fused_ordering(120)
00:17:51.255  fused_ordering(121)
00:17:51.255  fused_ordering(122)
00:17:51.255  fused_ordering(123)
00:17:51.255  fused_ordering(124)
00:17:51.255  fused_ordering(125)
00:17:51.256  fused_ordering(126)
00:17:51.256  fused_ordering(127)
00:17:51.256  fused_ordering(128)
00:17:51.256  fused_ordering(129)
00:17:51.256  fused_ordering(130)
00:17:51.256  fused_ordering(131)
00:17:51.256  fused_ordering(132)
00:17:51.256  fused_ordering(133)
00:17:51.256  fused_ordering(134)
00:17:51.256  fused_ordering(135)
00:17:51.256  fused_ordering(136)
00:17:51.256  fused_ordering(137)
00:17:51.256  fused_ordering(138)
00:17:51.256  fused_ordering(139)
00:17:51.256  fused_ordering(140)
00:17:51.256  fused_ordering(141)
00:17:51.256  fused_ordering(142)
00:17:51.256  fused_ordering(143)
00:17:51.256  fused_ordering(144)
00:17:51.256  fused_ordering(145)
00:17:51.256  fused_ordering(146)
00:17:51.256  fused_ordering(147)
00:17:51.256  fused_ordering(148)
00:17:51.256  fused_ordering(149)
00:17:51.256  fused_ordering(150)
00:17:51.256  fused_ordering(151)
00:17:51.256  fused_ordering(152)
00:17:51.256  fused_ordering(153)
00:17:51.256  fused_ordering(154)
00:17:51.256  fused_ordering(155)
00:17:51.256  fused_ordering(156)
00:17:51.256  fused_ordering(157)
00:17:51.256  fused_ordering(158)
00:17:51.256  fused_ordering(159)
00:17:51.256  fused_ordering(160)
00:17:51.256  fused_ordering(161)
00:17:51.256  fused_ordering(162)
00:17:51.256  fused_ordering(163)
00:17:51.256  fused_ordering(164)
00:17:51.256  fused_ordering(165)
00:17:51.256  fused_ordering(166)
00:17:51.256  fused_ordering(167)
00:17:51.256  fused_ordering(168)
00:17:51.256  fused_ordering(169)
00:17:51.256  fused_ordering(170)
00:17:51.256  fused_ordering(171)
00:17:51.256  fused_ordering(172)
00:17:51.256  fused_ordering(173)
00:17:51.256  fused_ordering(174)
00:17:51.256  fused_ordering(175)
00:17:51.256  fused_ordering(176)
00:17:51.256  fused_ordering(177)
00:17:51.256  fused_ordering(178)
00:17:51.256  fused_ordering(179)
00:17:51.256  fused_ordering(180)
00:17:51.256  fused_ordering(181)
00:17:51.256  fused_ordering(182)
00:17:51.256  fused_ordering(183)
00:17:51.256  fused_ordering(184)
00:17:51.256  fused_ordering(185)
00:17:51.256  fused_ordering(186)
00:17:51.256  fused_ordering(187)
00:17:51.256  fused_ordering(188)
00:17:51.256  fused_ordering(189)
00:17:51.256  fused_ordering(190)
00:17:51.256  fused_ordering(191)
00:17:51.256  fused_ordering(192)
00:17:51.256  fused_ordering(193)
00:17:51.256  fused_ordering(194)
00:17:51.256  fused_ordering(195)
00:17:51.256  fused_ordering(196)
00:17:51.256  fused_ordering(197)
00:17:51.256  fused_ordering(198)
00:17:51.256  fused_ordering(199)
00:17:51.256  fused_ordering(200)
00:17:51.256  fused_ordering(201)
00:17:51.256  fused_ordering(202)
00:17:51.256  fused_ordering(203)
00:17:51.256  fused_ordering(204)
00:17:51.256  fused_ordering(205)
00:17:51.515  fused_ordering(206)
00:17:51.515  fused_ordering(207)
00:17:51.515  fused_ordering(208)
00:17:51.515  fused_ordering(209)
00:17:51.515  fused_ordering(210)
00:17:51.515  fused_ordering(211)
00:17:51.515  fused_ordering(212)
00:17:51.515  fused_ordering(213)
00:17:51.515  fused_ordering(214)
00:17:51.515  fused_ordering(215)
00:17:51.515  fused_ordering(216)
00:17:51.515  fused_ordering(217)
00:17:51.515  fused_ordering(218)
00:17:51.515  fused_ordering(219)
00:17:51.515  fused_ordering(220)
00:17:51.515  fused_ordering(221)
00:17:51.515  fused_ordering(222)
00:17:51.515  fused_ordering(223)
00:17:51.515  fused_ordering(224)
00:17:51.515  fused_ordering(225)
00:17:51.515  fused_ordering(226)
00:17:51.515  fused_ordering(227)
00:17:51.515  fused_ordering(228)
00:17:51.515  fused_ordering(229)
00:17:51.515  fused_ordering(230)
00:17:51.515  fused_ordering(231)
00:17:51.515  fused_ordering(232)
00:17:51.515  fused_ordering(233)
00:17:51.515  fused_ordering(234)
00:17:51.515  fused_ordering(235)
00:17:51.515  fused_ordering(236)
00:17:51.515  fused_ordering(237)
00:17:51.515  fused_ordering(238)
00:17:51.515  fused_ordering(239)
00:17:51.515  fused_ordering(240)
00:17:51.515  fused_ordering(241)
00:17:51.515  fused_ordering(242)
00:17:51.515  fused_ordering(243)
00:17:51.515  fused_ordering(244)
00:17:51.515  fused_ordering(245)
00:17:51.515  fused_ordering(246)
00:17:51.515  fused_ordering(247)
00:17:51.515  fused_ordering(248)
00:17:51.515  fused_ordering(249)
00:17:51.515  fused_ordering(250)
00:17:51.515  fused_ordering(251)
00:17:51.515  fused_ordering(252)
00:17:51.515  fused_ordering(253)
00:17:51.515  fused_ordering(254)
00:17:51.515  fused_ordering(255)
00:17:51.515  fused_ordering(256)
00:17:51.515  fused_ordering(257)
00:17:51.515  fused_ordering(258)
00:17:51.515  fused_ordering(259)
00:17:51.515  fused_ordering(260)
00:17:51.515  fused_ordering(261)
00:17:51.515  fused_ordering(262)
00:17:51.515  fused_ordering(263)
00:17:51.515  fused_ordering(264)
00:17:51.515  fused_ordering(265)
00:17:51.515  fused_ordering(266)
00:17:51.515  fused_ordering(267)
00:17:51.515  fused_ordering(268)
00:17:51.515  fused_ordering(269)
00:17:51.515  fused_ordering(270)
00:17:51.515  fused_ordering(271)
00:17:51.515  fused_ordering(272)
00:17:51.515  fused_ordering(273)
00:17:51.515  fused_ordering(274)
00:17:51.515  fused_ordering(275)
00:17:51.515  fused_ordering(276)
00:17:51.515  fused_ordering(277)
00:17:51.515  fused_ordering(278)
00:17:51.515  fused_ordering(279)
00:17:51.515  fused_ordering(280)
00:17:51.515  fused_ordering(281)
00:17:51.515  fused_ordering(282)
00:17:51.515  fused_ordering(283)
00:17:51.515  fused_ordering(284)
00:17:51.515  fused_ordering(285)
00:17:51.515  fused_ordering(286)
00:17:51.515  fused_ordering(287)
00:17:51.515  fused_ordering(288)
00:17:51.515  fused_ordering(289)
00:17:51.516  fused_ordering(290)
00:17:51.516  fused_ordering(291)
00:17:51.516  fused_ordering(292)
00:17:51.516  fused_ordering(293)
00:17:51.516  fused_ordering(294)
00:17:51.516  fused_ordering(295)
00:17:51.516  fused_ordering(296)
00:17:51.516  fused_ordering(297)
00:17:51.516  fused_ordering(298)
00:17:51.516  fused_ordering(299)
00:17:51.516  fused_ordering(300)
00:17:51.516  fused_ordering(301)
00:17:51.516  fused_ordering(302)
00:17:51.516  fused_ordering(303)
00:17:51.516  fused_ordering(304)
00:17:51.516  fused_ordering(305)
00:17:51.516  fused_ordering(306)
00:17:51.516  fused_ordering(307)
00:17:51.516  fused_ordering(308)
00:17:51.516  fused_ordering(309)
00:17:51.516  fused_ordering(310)
00:17:51.516  fused_ordering(311)
00:17:51.516  fused_ordering(312)
00:17:51.516  fused_ordering(313)
00:17:51.516  fused_ordering(314)
00:17:51.516  fused_ordering(315)
00:17:51.516  fused_ordering(316)
00:17:51.516  fused_ordering(317)
00:17:51.516  fused_ordering(318)
00:17:51.516  fused_ordering(319)
00:17:51.516  fused_ordering(320)
00:17:51.516  fused_ordering(321)
00:17:51.516  fused_ordering(322)
00:17:51.516  fused_ordering(323)
00:17:51.516  fused_ordering(324)
00:17:51.516  fused_ordering(325)
00:17:51.516  fused_ordering(326)
00:17:51.516  fused_ordering(327)
00:17:51.516  fused_ordering(328)
00:17:51.516  fused_ordering(329)
00:17:51.516  fused_ordering(330)
00:17:51.516  fused_ordering(331)
00:17:51.516  fused_ordering(332)
00:17:51.516  fused_ordering(333)
00:17:51.516  fused_ordering(334)
00:17:51.516  fused_ordering(335)
00:17:51.516  fused_ordering(336)
00:17:51.516  fused_ordering(337)
00:17:51.516  fused_ordering(338)
00:17:51.516  fused_ordering(339)
00:17:51.516  fused_ordering(340)
00:17:51.516  fused_ordering(341)
00:17:51.516  fused_ordering(342)
00:17:51.516  fused_ordering(343)
00:17:51.516  fused_ordering(344)
00:17:51.516  fused_ordering(345)
00:17:51.516  fused_ordering(346)
00:17:51.516  fused_ordering(347)
00:17:51.516  fused_ordering(348)
00:17:51.516  fused_ordering(349)
00:17:51.516  fused_ordering(350)
00:17:51.516  fused_ordering(351)
00:17:51.516  fused_ordering(352)
00:17:51.516  fused_ordering(353)
00:17:51.516  fused_ordering(354)
00:17:51.516  fused_ordering(355)
00:17:51.516  fused_ordering(356)
00:17:51.516  fused_ordering(357)
00:17:51.516  fused_ordering(358)
00:17:51.516  fused_ordering(359)
00:17:51.516  fused_ordering(360)
00:17:51.516  fused_ordering(361)
00:17:51.516  fused_ordering(362)
00:17:51.516  fused_ordering(363)
00:17:51.516  fused_ordering(364)
00:17:51.516  fused_ordering(365)
00:17:51.516  fused_ordering(366)
00:17:51.516  fused_ordering(367)
00:17:51.516  fused_ordering(368)
00:17:51.516  fused_ordering(369)
00:17:51.516  fused_ordering(370)
00:17:51.516  fused_ordering(371)
00:17:51.516  fused_ordering(372)
00:17:51.516  fused_ordering(373)
00:17:51.516  fused_ordering(374)
00:17:51.516  fused_ordering(375)
00:17:51.516  fused_ordering(376)
00:17:51.516  fused_ordering(377)
00:17:51.516  fused_ordering(378)
00:17:51.516  fused_ordering(379)
00:17:51.516  fused_ordering(380)
00:17:51.516  fused_ordering(381)
00:17:51.516  fused_ordering(382)
00:17:51.516  fused_ordering(383)
00:17:51.516  fused_ordering(384)
00:17:51.516  fused_ordering(385)
00:17:51.516  fused_ordering(386)
00:17:51.516  fused_ordering(387)
00:17:51.516  fused_ordering(388)
00:17:51.516  fused_ordering(389)
00:17:51.516  fused_ordering(390)
00:17:51.516  fused_ordering(391)
00:17:51.516  fused_ordering(392)
00:17:51.516  fused_ordering(393)
00:17:51.516  fused_ordering(394)
00:17:51.516  fused_ordering(395)
00:17:51.516  fused_ordering(396)
00:17:51.516  fused_ordering(397)
00:17:51.516  fused_ordering(398)
00:17:51.516  fused_ordering(399)
00:17:51.516  fused_ordering(400)
00:17:51.516  fused_ordering(401)
00:17:51.516  fused_ordering(402)
00:17:51.516  fused_ordering(403)
00:17:51.516  fused_ordering(404)
00:17:51.516  fused_ordering(405)
00:17:51.516  fused_ordering(406)
00:17:51.516  fused_ordering(407)
00:17:51.516  fused_ordering(408)
00:17:51.516  fused_ordering(409)
00:17:51.516  fused_ordering(410)
00:17:52.083  fused_ordering(411)
00:17:52.083  fused_ordering(412)
00:17:52.083  fused_ordering(413)
00:17:52.083  fused_ordering(414)
00:17:52.083  fused_ordering(415)
00:17:52.083  fused_ordering(416)
00:17:52.083  fused_ordering(417)
00:17:52.083  fused_ordering(418)
00:17:52.083  fused_ordering(419)
00:17:52.083  fused_ordering(420)
00:17:52.083  fused_ordering(421)
00:17:52.083  fused_ordering(422)
00:17:52.083  fused_ordering(423)
00:17:52.083  fused_ordering(424)
00:17:52.083  fused_ordering(425)
00:17:52.083  fused_ordering(426)
00:17:52.083  fused_ordering(427)
00:17:52.083  fused_ordering(428)
00:17:52.083  fused_ordering(429)
00:17:52.083  fused_ordering(430)
00:17:52.083  fused_ordering(431)
00:17:52.083  fused_ordering(432)
00:17:52.083  fused_ordering(433)
00:17:52.083  fused_ordering(434)
00:17:52.083  fused_ordering(435)
00:17:52.083  fused_ordering(436)
00:17:52.083  fused_ordering(437)
00:17:52.083  fused_ordering(438)
00:17:52.083  fused_ordering(439)
00:17:52.083  fused_ordering(440)
00:17:52.083  fused_ordering(441)
00:17:52.083  fused_ordering(442)
00:17:52.083  fused_ordering(443)
00:17:52.083  fused_ordering(444)
00:17:52.083  fused_ordering(445)
00:17:52.083  fused_ordering(446)
00:17:52.083  fused_ordering(447)
00:17:52.083  fused_ordering(448)
00:17:52.083  fused_ordering(449)
00:17:52.083  fused_ordering(450)
00:17:52.083  fused_ordering(451)
00:17:52.083  fused_ordering(452)
00:17:52.083  fused_ordering(453)
00:17:52.083  fused_ordering(454)
00:17:52.083  fused_ordering(455)
00:17:52.083  fused_ordering(456)
00:17:52.083  fused_ordering(457)
00:17:52.083  fused_ordering(458)
00:17:52.083  fused_ordering(459)
00:17:52.083  fused_ordering(460)
00:17:52.083  fused_ordering(461)
00:17:52.083  fused_ordering(462)
00:17:52.083  fused_ordering(463)
00:17:52.083  fused_ordering(464)
00:17:52.083  fused_ordering(465)
00:17:52.084  fused_ordering(466)
00:17:52.084  fused_ordering(467)
00:17:52.084  fused_ordering(468)
00:17:52.084  fused_ordering(469)
00:17:52.084  fused_ordering(470)
00:17:52.084  fused_ordering(471)
00:17:52.084  fused_ordering(472)
00:17:52.084  fused_ordering(473)
00:17:52.084  fused_ordering(474)
00:17:52.084  fused_ordering(475)
00:17:52.084  fused_ordering(476)
00:17:52.084  fused_ordering(477)
00:17:52.084  fused_ordering(478)
00:17:52.084  fused_ordering(479)
00:17:52.084  fused_ordering(480)
00:17:52.084  fused_ordering(481)
00:17:52.084  fused_ordering(482)
00:17:52.084  fused_ordering(483)
00:17:52.084  fused_ordering(484)
00:17:52.084  fused_ordering(485)
00:17:52.084  fused_ordering(486)
00:17:52.084  fused_ordering(487)
00:17:52.084  fused_ordering(488)
00:17:52.084  fused_ordering(489)
00:17:52.084  fused_ordering(490)
00:17:52.084  fused_ordering(491)
00:17:52.084  fused_ordering(492)
00:17:52.084  fused_ordering(493)
00:17:52.084  fused_ordering(494)
00:17:52.084  fused_ordering(495)
00:17:52.084  fused_ordering(496)
00:17:52.084  fused_ordering(497)
00:17:52.084  fused_ordering(498)
00:17:52.084  fused_ordering(499)
00:17:52.084  fused_ordering(500)
00:17:52.084  fused_ordering(501)
00:17:52.084  fused_ordering(502)
00:17:52.084  fused_ordering(503)
00:17:52.084  fused_ordering(504)
00:17:52.084  fused_ordering(505)
00:17:52.084  fused_ordering(506)
00:17:52.084  fused_ordering(507)
00:17:52.084  fused_ordering(508)
00:17:52.084  fused_ordering(509)
00:17:52.084  fused_ordering(510)
00:17:52.084  fused_ordering(511)
00:17:52.084  fused_ordering(512)
00:17:52.084  fused_ordering(513)
00:17:52.084  fused_ordering(514)
00:17:52.084  fused_ordering(515)
00:17:52.084  fused_ordering(516)
00:17:52.084  fused_ordering(517)
00:17:52.084  fused_ordering(518)
00:17:52.084  fused_ordering(519)
00:17:52.084  fused_ordering(520)
00:17:52.084  fused_ordering(521)
00:17:52.084  fused_ordering(522)
00:17:52.084  fused_ordering(523)
00:17:52.084  fused_ordering(524)
00:17:52.084  fused_ordering(525)
00:17:52.084  fused_ordering(526)
00:17:52.084  fused_ordering(527)
00:17:52.084  fused_ordering(528)
00:17:52.084  fused_ordering(529)
00:17:52.084  fused_ordering(530)
00:17:52.084  fused_ordering(531)
00:17:52.084  fused_ordering(532)
00:17:52.084  fused_ordering(533)
00:17:52.084  fused_ordering(534)
00:17:52.084  fused_ordering(535)
00:17:52.084  fused_ordering(536)
00:17:52.084  fused_ordering(537)
00:17:52.084  fused_ordering(538)
00:17:52.084  fused_ordering(539)
00:17:52.084  fused_ordering(540)
00:17:52.084  fused_ordering(541)
00:17:52.084  fused_ordering(542)
00:17:52.084  fused_ordering(543)
00:17:52.084  fused_ordering(544)
00:17:52.084  fused_ordering(545)
00:17:52.084  fused_ordering(546)
00:17:52.084  fused_ordering(547)
00:17:52.084  fused_ordering(548)
00:17:52.084  fused_ordering(549)
00:17:52.084  fused_ordering(550)
00:17:52.084  fused_ordering(551)
00:17:52.084  fused_ordering(552)
00:17:52.084  fused_ordering(553)
00:17:52.084  fused_ordering(554)
00:17:52.084  fused_ordering(555)
00:17:52.084  fused_ordering(556)
00:17:52.084  fused_ordering(557)
00:17:52.084  fused_ordering(558)
00:17:52.084  fused_ordering(559)
00:17:52.084  fused_ordering(560)
00:17:52.084  fused_ordering(561)
00:17:52.084  fused_ordering(562)
00:17:52.084  fused_ordering(563)
00:17:52.084  fused_ordering(564)
00:17:52.084  fused_ordering(565)
00:17:52.084  fused_ordering(566)
00:17:52.084  fused_ordering(567)
00:17:52.084  fused_ordering(568)
00:17:52.084  fused_ordering(569)
00:17:52.084  fused_ordering(570)
00:17:52.084  fused_ordering(571)
00:17:52.084  fused_ordering(572)
00:17:52.084  fused_ordering(573)
00:17:52.084  fused_ordering(574)
00:17:52.084  fused_ordering(575)
00:17:52.084  fused_ordering(576)
00:17:52.084  fused_ordering(577)
00:17:52.084  fused_ordering(578)
00:17:52.084  fused_ordering(579)
00:17:52.084  fused_ordering(580)
00:17:52.084  fused_ordering(581)
00:17:52.084  fused_ordering(582)
00:17:52.084  fused_ordering(583)
00:17:52.084  fused_ordering(584)
00:17:52.084  fused_ordering(585)
00:17:52.084  fused_ordering(586)
00:17:52.084  fused_ordering(587)
00:17:52.084  fused_ordering(588)
00:17:52.084  fused_ordering(589)
00:17:52.084  fused_ordering(590)
00:17:52.084  fused_ordering(591)
00:17:52.084  fused_ordering(592)
00:17:52.084  fused_ordering(593)
00:17:52.084  fused_ordering(594)
00:17:52.084  fused_ordering(595)
00:17:52.084  fused_ordering(596)
00:17:52.084  fused_ordering(597)
00:17:52.084  fused_ordering(598)
00:17:52.084  fused_ordering(599)
00:17:52.084  fused_ordering(600)
00:17:52.084  fused_ordering(601)
00:17:52.084  fused_ordering(602)
00:17:52.084  fused_ordering(603)
00:17:52.084  fused_ordering(604)
00:17:52.084  fused_ordering(605)
00:17:52.084  fused_ordering(606)
00:17:52.084  fused_ordering(607)
00:17:52.084  fused_ordering(608)
00:17:52.084  fused_ordering(609)
00:17:52.084  fused_ordering(610)
00:17:52.084  fused_ordering(611)
00:17:52.084  fused_ordering(612)
00:17:52.084  fused_ordering(613)
00:17:52.084  fused_ordering(614)
00:17:52.084  fused_ordering(615)
00:17:52.343  fused_ordering(616)
00:17:52.343  fused_ordering(617)
00:17:52.343  fused_ordering(618)
00:17:52.343  fused_ordering(619)
00:17:52.343  fused_ordering(620)
00:17:52.343  fused_ordering(621)
00:17:52.343  fused_ordering(622)
00:17:52.343  fused_ordering(623)
00:17:52.343  fused_ordering(624)
00:17:52.343  fused_ordering(625)
00:17:52.343  fused_ordering(626)
00:17:52.343  fused_ordering(627)
00:17:52.343  fused_ordering(628)
00:17:52.343  fused_ordering(629)
00:17:52.343  fused_ordering(630)
00:17:52.343  fused_ordering(631)
00:17:52.343  fused_ordering(632)
00:17:52.343  fused_ordering(633)
00:17:52.343  fused_ordering(634)
00:17:52.343  fused_ordering(635)
00:17:52.343  fused_ordering(636)
00:17:52.343  fused_ordering(637)
00:17:52.343  fused_ordering(638)
00:17:52.343  fused_ordering(639)
00:17:52.343  fused_ordering(640)
00:17:52.343  fused_ordering(641)
00:17:52.343  fused_ordering(642)
00:17:52.343  fused_ordering(643)
00:17:52.343  fused_ordering(644)
00:17:52.343  fused_ordering(645)
00:17:52.343  fused_ordering(646)
00:17:52.343  fused_ordering(647)
00:17:52.343  fused_ordering(648)
00:17:52.343  fused_ordering(649)
00:17:52.343  fused_ordering(650)
00:17:52.343  fused_ordering(651)
00:17:52.343  fused_ordering(652)
00:17:52.343  fused_ordering(653)
00:17:52.343  fused_ordering(654)
00:17:52.343  fused_ordering(655)
00:17:52.343  fused_ordering(656)
00:17:52.343  fused_ordering(657)
00:17:52.343  fused_ordering(658)
00:17:52.343  fused_ordering(659)
00:17:52.343  fused_ordering(660)
00:17:52.343  fused_ordering(661)
00:17:52.343  fused_ordering(662)
00:17:52.343  fused_ordering(663)
00:17:52.343  fused_ordering(664)
00:17:52.343  fused_ordering(665)
00:17:52.343  fused_ordering(666)
00:17:52.343  fused_ordering(667)
00:17:52.343  fused_ordering(668)
00:17:52.343  fused_ordering(669)
00:17:52.343  fused_ordering(670)
00:17:52.343  fused_ordering(671)
00:17:52.343  fused_ordering(672)
00:17:52.343  fused_ordering(673)
00:17:52.343  fused_ordering(674)
00:17:52.343  fused_ordering(675)
00:17:52.343  fused_ordering(676)
00:17:52.343  fused_ordering(677)
00:17:52.343  fused_ordering(678)
00:17:52.343  fused_ordering(679)
00:17:52.343  fused_ordering(680)
00:17:52.343  fused_ordering(681)
00:17:52.343  fused_ordering(682)
00:17:52.343  fused_ordering(683)
00:17:52.343  fused_ordering(684)
00:17:52.343  fused_ordering(685)
00:17:52.343  fused_ordering(686)
00:17:52.343  fused_ordering(687)
00:17:52.343  fused_ordering(688)
00:17:52.343  fused_ordering(689)
00:17:52.343  fused_ordering(690)
00:17:52.343  fused_ordering(691)
00:17:52.343  fused_ordering(692)
00:17:52.343  fused_ordering(693)
00:17:52.343  fused_ordering(694)
00:17:52.343  fused_ordering(695)
00:17:52.343  fused_ordering(696)
00:17:52.343  fused_ordering(697)
00:17:52.343  fused_ordering(698)
00:17:52.343  fused_ordering(699)
00:17:52.343  fused_ordering(700)
00:17:52.343  fused_ordering(701)
00:17:52.343  fused_ordering(702)
00:17:52.343  fused_ordering(703)
00:17:52.343  fused_ordering(704)
00:17:52.343  fused_ordering(705)
00:17:52.343  fused_ordering(706)
00:17:52.343  fused_ordering(707)
00:17:52.343  fused_ordering(708)
00:17:52.343  fused_ordering(709)
00:17:52.343  fused_ordering(710)
00:17:52.343  fused_ordering(711)
00:17:52.343  fused_ordering(712)
00:17:52.343  fused_ordering(713)
00:17:52.343  fused_ordering(714)
00:17:52.343  fused_ordering(715)
00:17:52.343  fused_ordering(716)
00:17:52.343  fused_ordering(717)
00:17:52.343  fused_ordering(718)
00:17:52.343  fused_ordering(719)
00:17:52.343  fused_ordering(720)
00:17:52.343  fused_ordering(721)
00:17:52.343  fused_ordering(722)
00:17:52.343  fused_ordering(723)
00:17:52.343  fused_ordering(724)
00:17:52.343  fused_ordering(725)
00:17:52.343  fused_ordering(726)
00:17:52.343  fused_ordering(727)
00:17:52.343  fused_ordering(728)
00:17:52.344  fused_ordering(729)
00:17:52.344  fused_ordering(730)
00:17:52.344  fused_ordering(731)
00:17:52.344  fused_ordering(732)
00:17:52.344  fused_ordering(733)
00:17:52.344  fused_ordering(734)
00:17:52.344  fused_ordering(735)
00:17:52.344  fused_ordering(736)
00:17:52.344  fused_ordering(737)
00:17:52.344  fused_ordering(738)
00:17:52.344  fused_ordering(739)
00:17:52.344  fused_ordering(740)
00:17:52.344  fused_ordering(741)
00:17:52.344  fused_ordering(742)
00:17:52.344  fused_ordering(743)
00:17:52.344  fused_ordering(744)
00:17:52.344  fused_ordering(745)
00:17:52.344  fused_ordering(746)
00:17:52.344  fused_ordering(747)
00:17:52.344  fused_ordering(748)
00:17:52.344  fused_ordering(749)
00:17:52.344  fused_ordering(750)
00:17:52.344  fused_ordering(751)
00:17:52.344  fused_ordering(752)
00:17:52.344  fused_ordering(753)
00:17:52.344  fused_ordering(754)
00:17:52.344  fused_ordering(755)
00:17:52.344  fused_ordering(756)
00:17:52.344  fused_ordering(757)
00:17:52.344  fused_ordering(758)
00:17:52.344  fused_ordering(759)
00:17:52.344  fused_ordering(760)
00:17:52.344  fused_ordering(761)
00:17:52.344  fused_ordering(762)
00:17:52.344  fused_ordering(763)
00:17:52.344  fused_ordering(764)
00:17:52.344  fused_ordering(765)
00:17:52.344  fused_ordering(766)
00:17:52.344  fused_ordering(767)
00:17:52.344  fused_ordering(768)
00:17:52.344  fused_ordering(769)
00:17:52.344  fused_ordering(770)
00:17:52.344  fused_ordering(771)
00:17:52.344  fused_ordering(772)
00:17:52.344  fused_ordering(773)
00:17:52.344  fused_ordering(774)
00:17:52.344  fused_ordering(775)
00:17:52.344  fused_ordering(776)
00:17:52.344  fused_ordering(777)
00:17:52.344  fused_ordering(778)
00:17:52.344  fused_ordering(779)
00:17:52.344  fused_ordering(780)
00:17:52.344  fused_ordering(781)
00:17:52.344  fused_ordering(782)
00:17:52.344  fused_ordering(783)
00:17:52.344  fused_ordering(784)
00:17:52.344  fused_ordering(785)
00:17:52.344  fused_ordering(786)
00:17:52.344  fused_ordering(787)
00:17:52.344  fused_ordering(788)
00:17:52.344  fused_ordering(789)
00:17:52.344  fused_ordering(790)
00:17:52.344  fused_ordering(791)
00:17:52.344  fused_ordering(792)
00:17:52.344  fused_ordering(793)
00:17:52.344  fused_ordering(794)
00:17:52.344  fused_ordering(795)
00:17:52.344  fused_ordering(796)
00:17:52.344  fused_ordering(797)
00:17:52.344  fused_ordering(798)
00:17:52.344  fused_ordering(799)
00:17:52.344  fused_ordering(800)
00:17:52.344  fused_ordering(801)
00:17:52.344  fused_ordering(802)
00:17:52.344  fused_ordering(803)
00:17:52.344  fused_ordering(804)
00:17:52.344  fused_ordering(805)
00:17:52.344  fused_ordering(806)
00:17:52.344  fused_ordering(807)
00:17:52.344  fused_ordering(808)
00:17:52.344  fused_ordering(809)
00:17:52.344  fused_ordering(810)
00:17:52.344  fused_ordering(811)
00:17:52.344  fused_ordering(812)
00:17:52.344  fused_ordering(813)
00:17:52.344  fused_ordering(814)
00:17:52.344  fused_ordering(815)
00:17:52.344  fused_ordering(816)
00:17:52.344  fused_ordering(817)
00:17:52.344  fused_ordering(818)
00:17:52.344  fused_ordering(819)
00:17:52.344  fused_ordering(820)
00:17:52.912  fused_ordering(821)
00:17:52.912  [2024-12-09 23:59:08.506053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x69e340 is same with the state(6) to be set
00:17:52.912  fused_ordering(822)
00:17:52.912  fused_ordering(823)
00:17:52.912  fused_ordering(824)
00:17:52.912  fused_ordering(825)
00:17:52.912  fused_ordering(826)
00:17:52.912  fused_ordering(827)
00:17:52.912  fused_ordering(828)
00:17:52.912  fused_ordering(829)
00:17:52.912  fused_ordering(830)
00:17:52.912  fused_ordering(831)
00:17:52.912  fused_ordering(832)
00:17:52.912  fused_ordering(833)
00:17:52.912  fused_ordering(834)
00:17:52.912  fused_ordering(835)
00:17:52.912  fused_ordering(836)
00:17:52.912  fused_ordering(837)
00:17:52.912  fused_ordering(838)
00:17:52.912  fused_ordering(839)
00:17:52.912  fused_ordering(840)
00:17:52.912  fused_ordering(841)
00:17:52.912  fused_ordering(842)
00:17:52.912  fused_ordering(843)
00:17:52.912  fused_ordering(844)
00:17:52.912  fused_ordering(845)
00:17:52.912  fused_ordering(846)
00:17:52.912  fused_ordering(847)
00:17:52.912  fused_ordering(848)
00:17:52.912  fused_ordering(849)
00:17:52.912  fused_ordering(850)
00:17:52.912  fused_ordering(851)
00:17:52.912  fused_ordering(852)
00:17:52.912  fused_ordering(853)
00:17:52.912  fused_ordering(854)
00:17:52.912  fused_ordering(855)
00:17:52.912  fused_ordering(856)
00:17:52.912  fused_ordering(857)
00:17:52.912  fused_ordering(858)
00:17:52.912  fused_ordering(859)
00:17:52.912  fused_ordering(860)
00:17:52.912  fused_ordering(861)
00:17:52.912  fused_ordering(862)
00:17:52.912  fused_ordering(863)
00:17:52.912  fused_ordering(864)
00:17:52.912  fused_ordering(865)
00:17:52.912  fused_ordering(866)
00:17:52.912  fused_ordering(867)
00:17:52.912  fused_ordering(868)
00:17:52.912  fused_ordering(869)
00:17:52.912  fused_ordering(870)
00:17:52.912  fused_ordering(871)
00:17:52.912  fused_ordering(872)
00:17:52.912  fused_ordering(873)
00:17:52.912  fused_ordering(874)
00:17:52.912  fused_ordering(875)
00:17:52.912  fused_ordering(876)
00:17:52.912  fused_ordering(877)
00:17:52.912  fused_ordering(878)
00:17:52.912  fused_ordering(879)
00:17:52.912  fused_ordering(880)
00:17:52.912  fused_ordering(881)
00:17:52.912  fused_ordering(882)
00:17:52.912  fused_ordering(883)
00:17:52.912  fused_ordering(884)
00:17:52.912  fused_ordering(885)
00:17:52.912  fused_ordering(886)
00:17:52.912  fused_ordering(887)
00:17:52.912  fused_ordering(888)
00:17:52.912  fused_ordering(889)
00:17:52.912  fused_ordering(890)
00:17:52.912  fused_ordering(891)
00:17:52.912  fused_ordering(892)
00:17:52.912  fused_ordering(893)
00:17:52.912  fused_ordering(894)
00:17:52.912  fused_ordering(895)
00:17:52.912  fused_ordering(896)
00:17:52.912  fused_ordering(897)
00:17:52.912  fused_ordering(898)
00:17:52.912  fused_ordering(899)
00:17:52.912  fused_ordering(900)
00:17:52.912  fused_ordering(901)
00:17:52.912  fused_ordering(902)
00:17:52.912  fused_ordering(903)
00:17:52.912  fused_ordering(904)
00:17:52.912  fused_ordering(905)
00:17:52.912  fused_ordering(906)
00:17:52.912  fused_ordering(907)
00:17:52.912  fused_ordering(908)
00:17:52.912  fused_ordering(909)
00:17:52.912  fused_ordering(910)
00:17:52.912  fused_ordering(911)
00:17:52.912  fused_ordering(912)
00:17:52.912  fused_ordering(913)
00:17:52.912  fused_ordering(914)
00:17:52.912  fused_ordering(915)
00:17:52.912  fused_ordering(916)
00:17:52.912  fused_ordering(917)
00:17:52.912  fused_ordering(918)
00:17:52.912  fused_ordering(919)
00:17:52.912  fused_ordering(920)
00:17:52.912  fused_ordering(921)
00:17:52.912  fused_ordering(922)
00:17:52.912  fused_ordering(923)
00:17:52.912  fused_ordering(924)
00:17:52.912  fused_ordering(925)
00:17:52.912  fused_ordering(926)
00:17:52.912  fused_ordering(927)
00:17:52.912  fused_ordering(928)
00:17:52.912  fused_ordering(929)
00:17:52.912  fused_ordering(930)
00:17:52.912  fused_ordering(931)
00:17:52.912  fused_ordering(932)
00:17:52.912  fused_ordering(933)
00:17:52.912  fused_ordering(934)
00:17:52.912  fused_ordering(935)
00:17:52.912  fused_ordering(936)
00:17:52.912  fused_ordering(937)
00:17:52.912  fused_ordering(938)
00:17:52.912  fused_ordering(939)
00:17:52.912  fused_ordering(940)
00:17:52.912  fused_ordering(941)
00:17:52.912  fused_ordering(942)
00:17:52.912  fused_ordering(943)
00:17:52.912  fused_ordering(944)
00:17:52.912  fused_ordering(945)
00:17:52.912  fused_ordering(946)
00:17:52.912  fused_ordering(947)
00:17:52.912  fused_ordering(948)
00:17:52.912  fused_ordering(949)
00:17:52.912  fused_ordering(950)
00:17:52.912  fused_ordering(951)
00:17:52.912  fused_ordering(952)
00:17:52.912  fused_ordering(953)
00:17:52.912  fused_ordering(954)
00:17:52.912  fused_ordering(955)
00:17:52.912  fused_ordering(956)
00:17:52.912  fused_ordering(957)
00:17:52.912  fused_ordering(958)
00:17:52.912  fused_ordering(959)
00:17:52.912  fused_ordering(960)
00:17:52.912  fused_ordering(961)
00:17:52.912  fused_ordering(962)
00:17:52.912  fused_ordering(963)
00:17:52.912  fused_ordering(964)
00:17:52.912  fused_ordering(965)
00:17:52.912  fused_ordering(966)
00:17:52.912  fused_ordering(967)
00:17:52.912  fused_ordering(968)
00:17:52.912  fused_ordering(969)
00:17:52.912  fused_ordering(970)
00:17:52.912  fused_ordering(971)
00:17:52.912  fused_ordering(972)
00:17:52.912  fused_ordering(973)
00:17:52.912  fused_ordering(974)
00:17:52.912  fused_ordering(975)
00:17:52.912  fused_ordering(976)
00:17:52.912  fused_ordering(977)
00:17:52.912  fused_ordering(978)
00:17:52.912  fused_ordering(979)
00:17:52.912  fused_ordering(980)
00:17:52.912  fused_ordering(981)
00:17:52.912  fused_ordering(982)
00:17:52.912  fused_ordering(983)
00:17:52.912  fused_ordering(984)
00:17:52.912  fused_ordering(985)
00:17:52.912  fused_ordering(986)
00:17:52.912  fused_ordering(987)
00:17:52.912  fused_ordering(988)
00:17:52.912  fused_ordering(989)
00:17:52.912  fused_ordering(990)
00:17:52.912  fused_ordering(991)
00:17:52.912  fused_ordering(992)
00:17:52.912  fused_ordering(993)
00:17:52.912  fused_ordering(994)
00:17:52.912  fused_ordering(995)
00:17:52.912  fused_ordering(996)
00:17:52.912  fused_ordering(997)
00:17:52.912  fused_ordering(998)
00:17:52.912  fused_ordering(999)
00:17:52.912  fused_ordering(1000)
00:17:52.912  fused_ordering(1001)
00:17:52.912  fused_ordering(1002)
00:17:52.912  fused_ordering(1003)
00:17:52.912  fused_ordering(1004)
00:17:52.912  fused_ordering(1005)
00:17:52.912  fused_ordering(1006)
00:17:52.912  fused_ordering(1007)
00:17:52.912  fused_ordering(1008)
00:17:52.912  fused_ordering(1009)
00:17:52.912  fused_ordering(1010)
00:17:52.912  fused_ordering(1011)
00:17:52.912  fused_ordering(1012)
00:17:52.912  fused_ordering(1013)
00:17:52.912  fused_ordering(1014)
00:17:52.912  fused_ordering(1015)
00:17:52.912  fused_ordering(1016)
00:17:52.912  fused_ordering(1017)
00:17:52.912  fused_ordering(1018)
00:17:52.912  fused_ordering(1019)
00:17:52.912  fused_ordering(1020)
00:17:52.912  fused_ordering(1021)
00:17:52.912  fused_ordering(1022)
00:17:52.912  fused_ordering(1023)
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:52.912   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:52.913  rmmod nvme_tcp
00:17:52.913  rmmod nvme_fabrics
00:17:52.913  rmmod nvme_keyring
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 3041564 ']'
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 3041564
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3041564 ']'
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3041564
00:17:52.913    23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:52.913    23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3041564
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3041564'
00:17:52.913  killing process with pid 3041564
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3041564
00:17:52.913   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3041564
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:53.172   23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:53.172    23:59:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:17:55.076  
00:17:55.076  real	0m10.598s
00:17:55.076  user	0m4.936s
00:17:55.076  sys	0m5.763s
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:17:55.076  ************************************
00:17:55.076  END TEST nvmf_fused_ordering
00:17:55.076  ************************************
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:55.076   23:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:17:55.337  ************************************
00:17:55.337  START TEST nvmf_ns_masking
00:17:55.337  ************************************
00:17:55.337   23:59:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:17:55.337  * Looking for test storage...
00:17:55.337  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:17:55.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:55.337  		--rc genhtml_branch_coverage=1
00:17:55.337  		--rc genhtml_function_coverage=1
00:17:55.337  		--rc genhtml_legend=1
00:17:55.337  		--rc geninfo_all_blocks=1
00:17:55.337  		--rc geninfo_unexecuted_blocks=1
00:17:55.337  		
00:17:55.337  		'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:17:55.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:55.337  		--rc genhtml_branch_coverage=1
00:17:55.337  		--rc genhtml_function_coverage=1
00:17:55.337  		--rc genhtml_legend=1
00:17:55.337  		--rc geninfo_all_blocks=1
00:17:55.337  		--rc geninfo_unexecuted_blocks=1
00:17:55.337  		
00:17:55.337  		'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:17:55.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:55.337  		--rc genhtml_branch_coverage=1
00:17:55.337  		--rc genhtml_function_coverage=1
00:17:55.337  		--rc genhtml_legend=1
00:17:55.337  		--rc geninfo_all_blocks=1
00:17:55.337  		--rc geninfo_unexecuted_blocks=1
00:17:55.337  		
00:17:55.337  		'
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:17:55.337  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:17:55.337  		--rc genhtml_branch_coverage=1
00:17:55.337  		--rc genhtml_function_coverage=1
00:17:55.337  		--rc genhtml_legend=1
00:17:55.337  		--rc geninfo_all_blocks=1
00:17:55.337  		--rc geninfo_unexecuted_blocks=1
00:17:55.337  		
00:17:55.337  		'
00:17:55.337   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:55.337     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:17:55.337    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:55.338     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:17:55.338     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:17:55.338     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:17:55.338     23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:55.338      23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:55.338      23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:55.338      23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:55.338      23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:17:55.338      23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:17:55.338  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b5bbd768-f9fc-48bf-98a2-9b460e1fb5c2
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c33c8720-dd7c-407a-ac00-cca9df74b521
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f898e9c1-4dc0-4c09-bc19-d4460ecb7255
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:55.338    23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable
00:17:55.338   23:59:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=()
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:18:01.908  Found 0000:af:00.0 (0x8086 - 0x159b)
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:01.908   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:18:01.909  Found 0000:af:00.1 (0x8086 - 0x159b)
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:18:01.909  Found net devices under 0000:af:00.0: cvl_0_0
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:18:01.909  Found net devices under 0000:af:00.1: cvl_0_1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:18:01.909   23:59:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:18:01.909  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:01.909  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms
00:18:01.909  
00:18:01.909  --- 10.0.0.2 ping statistics ---
00:18:01.909  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:01.909  rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:01.909  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:01.909  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms
00:18:01.909  
00:18:01.909  --- 10.0.0.1 ping statistics ---
00:18:01.909  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:01.909  rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=3045473
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 3045473
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3045473 ']'
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:01.909   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:01.909  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:01.910  [2024-12-09 23:59:17.111036] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:01.910  [2024-12-09 23:59:17.111078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:01.910  [2024-12-09 23:59:17.186127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:01.910  [2024-12-09 23:59:17.225979] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:01.910  [2024-12-09 23:59:17.226015] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:01.910  [2024-12-09 23:59:17.226023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:01.910  [2024-12-09 23:59:17.226029] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:01.910  [2024-12-09 23:59:17.226035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:01.910  [2024-12-09 23:59:17.226525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:18:01.910  [2024-12-09 23:59:17.530945] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:18:01.910  Malloc1
00:18:01.910   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:18:02.169  Malloc2
00:18:02.169   23:59:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:18:02.427   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:18:02.742   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:03.026  [2024-12-09 23:59:18.552577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f898e9c1-4dc0-4c09-bc19-d4460ecb7255 -a 10.0.0.2 -s 4420 -i 4
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:18:03.026   23:59:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
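The `waitforserial` helper traced above (common/autotest_common.sh@1202-1212) sleeps, then polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` up to 16 times until the expected device count appears. A rough Python sketch of that retry logic, with the device listing stubbed out (function and variable names here are illustrative, not SPDK's):

```python
# Illustrative re-implementation of the waitforserial retry loop:
# poll a device-listing callback until the expected number of devices
# carrying the given serial shows up, or give up after `retries` tries.
import time

def wait_for_serial(list_devices, serial, expected=1, retries=16, delay=0.0):
    """list_devices() -> list of (name, serial) tuples, like `lsblk -l -o NAME,SERIAL`."""
    for _ in range(retries):
        found = sum(1 for _, s in list_devices() if s == serial)
        if found == expected:
            return True
        time.sleep(delay)
    return False

# Stubbed listing standing in for lsblk output after `nvme connect`.
devices = [("nvme0n1", "SPDKISFASTANDAWESOME")]
assert wait_for_serial(lambda: devices, "SPDKISFASTANDAWESOME", expected=1)
```

In the log the first poll already sees `nvme_devices=1`, so the shell loop returns on its first iteration.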
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:05.017  [   0]:0x1
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:05.017    23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:05.017   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8ddc0c12d754ab2acaa50758edd61c9
00:18:05.018   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8ddc0c12d754ab2acaa50758edd61c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
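The `ns_is_visible` check traced above (target/ns_masking.sh@43-45) greps `nvme list-ns` for the NSID and then requires that the NGUID reported by `nvme id-ns ... -o json` is not all zeros. A rough Python equivalent of that two-part check, with the JSON shape and sample NGUID taken from the log (this is a sketch, not SPDK's test code):

```python
# Sketch of the ns_is_visible check: the NSID must appear in the
# `nvme list-ns` output, and the namespace's NGUID must be non-zero
# (a masked/invisible namespace identifies with an all-zero NGUID).
import json

def ns_is_visible(list_ns_output, id_ns_json, nsid_hex):
    if nsid_hex not in list_ns_output:
        return False
    nguid = json.loads(id_ns_json)["nguid"]
    return nguid != "0" * 32

# Sample data modeled on the log: NSID 0x1 is listed with a real NGUID.
listed = "[   0]:0x1"
ident = json.dumps({"nguid": "e8ddc0c12d754ab2acaa50758edd61c9"})
assert ns_is_visible(listed, ident, "0x1")
assert not ns_is_visible(listed, json.dumps({"nguid": "0" * 32}), "0x1")
```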
00:18:05.018   23:59:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:18:05.276   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:18:05.276   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:05.276   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:05.276  [   0]:0x1
00:18:05.276    23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:05.276    23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:05.276   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8ddc0c12d754ab2acaa50758edd61c9
00:18:05.276   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8ddc0c12d754ab2acaa50758edd61c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:05.277  [   1]:0x2
00:18:05.277    23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:05.277    23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:18:05.277   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:05.535  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:05.535   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:05.794   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f898e9c1-4dc0-4c09-bc19-d4460ecb7255 -a 10.0.0.2 -s 4420 -i 4
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:18:06.053   23:59:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:08.588    23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:08.588    23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:08.588    23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:08.588    23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:08.588    23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.588   23:59:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
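The `NOT` wrapper traced above (common/autotest_common.sh@652-679) runs a check, captures its exit status into `es`, and succeeds only when the check failed: `(( !es == 0 ))` inverts the result. A minimal Python sketch of that inversion (the helper name is kept, everything else is illustrative):

```python
# Sketch of the NOT helper: run a check, record its "exit status",
# and succeed only if the check failed -- used here to assert that
# a masked namespace is NOT visible.
def NOT(check, *args):
    try:
        ok = check(*args)
    except Exception:
        ok = False
    es = 0 if ok else 1   # mirrors `es=$?` capturing the check's status
    return es != 0        # mirrors `(( !es == 0 ))` inverting success

assert NOT(lambda nsid: nsid == "0x2", "0x1")      # hidden namespace: NOT passes
assert not NOT(lambda nsid: nsid == "0x1", "0x1")  # visible namespace: NOT fails
```

In the log, `ns_is_visible 0x1` sees the all-zero NGUID and fails with `es=1`, so the wrapped `NOT ns_is_visible 0x1` passes, confirming namespace 1 is masked from this host.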
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:08.588  [   0]:0x2
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:08.588  [   0]:0x1
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8ddc0c12d754ab2acaa50758edd61c9
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8ddc0c12d754ab2acaa50758edd61c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.588   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:08.588  [   1]:0x2
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:08.588    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:08.869   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:08.869   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:08.869   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:08.870    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:08.870    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:08.870    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:08.870   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:08.870  [   0]:0x2
00:18:09.129    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:09.129    23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:09.129   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:09.129   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:09.129   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:18:09.129   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:09.129  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:09.129   23:59:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f898e9c1-4dc0-4c09-bc19-d4460ecb7255 -a 10.0.0.2 -s 4420 -i 4
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:09.387   23:59:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:11.921  [   0]:0x1
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e8ddc0c12d754ab2acaa50758edd61c9
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e8ddc0c12d754ab2acaa50758edd61c9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:11.921  [   1]:0x2
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:11.921  [   0]:0x2
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.921    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:11.921   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:18:12.180  [2024-12-09 23:59:27.855040] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:18:12.180  request:
00:18:12.180  {
00:18:12.180    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:12.180    "nsid": 2,
00:18:12.180    "host": "nqn.2016-06.io.spdk:host1",
00:18:12.180    "method": "nvmf_ns_remove_host",
00:18:12.180    "req_id": 1
00:18:12.180  }
00:18:12.180  Got JSON-RPC error response
00:18:12.180  response:
00:18:12.180  {
00:18:12.180    "code": -32602,
00:18:12.180    "message": "Invalid parameters"
00:18:12.180  }
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:12.180    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:18:12.180    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:18:12.180    23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:12.180   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:18:12.181   23:59:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:18:12.181  [   0]:0x2
00:18:12.181    23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:18:12.181    23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=68266e7d30984c88ba64bf36cf8b68cf
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 68266e7d30984c88ba64bf36cf8b68cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:12.440  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3047449
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3047449 /var/tmp/host.sock
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3047449 ']'
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:18:12.440  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:12.440   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:12.440  [2024-12-09 23:59:28.226308] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:12.440  [2024-12-09 23:59:28.226355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3047449 ]
00:18:12.698  [2024-12-09 23:59:28.301519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:12.698  [2024-12-09 23:59:28.340467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:12.698   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:12.698   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:18:12.698   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:12.956   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:13.215    23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b5bbd768-f9fc-48bf-98a2-9b460e1fb5c2
00:18:13.215    23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:13.215   23:59:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B5BBD768F9FC48BF98A29B460E1FB5C2 -i
00:18:13.473    23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c33c8720-dd7c-407a-ac00-cca9df74b521
00:18:13.473    23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:13.473   23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C33C8720DD7C407AAC00CCA9DF74B521 -i
00:18:13.732   23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:18:13.732   23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:18:13.991   23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:13.991   23:59:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:18:14.250  nvme0n1
00:18:14.250   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:14.250   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:18:14.509  nvme1n2
00:18:14.509    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:18:14.509    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:18:14.509    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:18:14.509    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:18:14.509    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:18:14.767   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:18:14.767    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:18:14.767    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:18:14.767    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:18:15.026   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b5bbd768-f9fc-48bf-98a2-9b460e1fb5c2 == \b\5\b\b\d\7\6\8\-\f\9\f\c\-\4\8\b\f\-\9\8\a\2\-\9\b\4\6\0\e\1\f\b\5\c\2 ]]
00:18:15.026    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:18:15.026    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:18:15.026    23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:18:15.285   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c33c8720-dd7c-407a-ac00-cca9df74b521 == \c\3\3\c\8\7\2\0\-\d\d\7\c\-\4\0\7\a\-\a\c\0\0\-\c\c\a\9\d\f\7\4\b\5\2\1 ]]
00:18:15.285   23:59:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:15.285   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:18:15.544    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b5bbd768-f9fc-48bf-98a2-9b460e1fb5c2
00:18:15.544    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B5BBD768F9FC48BF98A29B460E1FB5C2
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B5BBD768F9FC48BF98A29B460E1FB5C2
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:15.544    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:15.544    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:18:15.544   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B5BBD768F9FC48BF98A29B460E1FB5C2
00:18:15.803  [2024-12-09 23:59:31.480970] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
00:18:15.803  [2024-12-09 23:59:31.481001] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
00:18:15.803  [2024-12-09 23:59:31.481009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:15.803  request:
00:18:15.803  {
00:18:15.803    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:18:15.803    "namespace": {
00:18:15.803      "bdev_name": "invalid",
00:18:15.803      "nsid": 1,
00:18:15.803      "nguid": "B5BBD768F9FC48BF98A29B460E1FB5C2",
00:18:15.803      "no_auto_visible": false,
00:18:15.803      "hide_metadata": false
00:18:15.803    },
00:18:15.803    "method": "nvmf_subsystem_add_ns",
00:18:15.803    "req_id": 1
00:18:15.803  }
00:18:15.803  Got JSON-RPC error response
00:18:15.803  response:
00:18:15.803  {
00:18:15.803    "code": -32602,
00:18:15.803    "message": "Invalid parameters"
00:18:15.803  }
00:18:15.803   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:18:15.803   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:18:15.803   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:18:15.803   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:18:15.803    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b5bbd768-f9fc-48bf-98a2-9b460e1fb5c2
00:18:15.803    23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:18:15.803   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B5BBD768F9FC48BF98A29B460E1FB5C2 -i
00:18:16.062   23:59:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s
00:18:17.965    23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs
00:18:17.965    23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length
00:18:17.965    23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 ))
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3047449
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3047449 ']'
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3047449
00:18:18.224    23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:18.224    23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3047449
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:18.224   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:18.225   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3047449'
00:18:18.225  killing process with pid 3047449
00:18:18.225   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3047449
00:18:18.225   23:59:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3047449
00:18:18.483   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:18.743  rmmod nvme_tcp
00:18:18.743  rmmod nvme_fabrics
00:18:18.743  rmmod nvme_keyring
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 3045473 ']'
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 3045473
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3045473 ']'
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3045473
00:18:18.743    23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:18.743    23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045473
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045473'
00:18:18.743  killing process with pid 3045473
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3045473
00:18:18.743   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3045473
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:19.002   23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:19.002    23:59:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:21.539  
00:18:21.539  real	0m25.894s
00:18:21.539  user	0m30.816s
00:18:21.539  sys	0m7.040s
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:18:21.539  ************************************
00:18:21.539  END TEST nvmf_ns_masking
00:18:21.539  ************************************
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]]
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:21.539  ************************************
00:18:21.539  START TEST nvmf_nvme_cli
00:18:21.539  ************************************
00:18:21.539   23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp
00:18:21.539  * Looking for test storage...
00:18:21.539  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:21.539    23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:21.539     23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version
00:18:21.539     23:59:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-:
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-:
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<'
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:21.539     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0
00:18:21.539    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:21.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:21.540  		--rc genhtml_branch_coverage=1
00:18:21.540  		--rc genhtml_function_coverage=1
00:18:21.540  		--rc genhtml_legend=1
00:18:21.540  		--rc geninfo_all_blocks=1
00:18:21.540  		--rc geninfo_unexecuted_blocks=1
00:18:21.540  		
00:18:21.540  		'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:21.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:21.540  		--rc genhtml_branch_coverage=1
00:18:21.540  		--rc genhtml_function_coverage=1
00:18:21.540  		--rc genhtml_legend=1
00:18:21.540  		--rc geninfo_all_blocks=1
00:18:21.540  		--rc geninfo_unexecuted_blocks=1
00:18:21.540  		
00:18:21.540  		'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:21.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:21.540  		--rc genhtml_branch_coverage=1
00:18:21.540  		--rc genhtml_function_coverage=1
00:18:21.540  		--rc genhtml_legend=1
00:18:21.540  		--rc geninfo_all_blocks=1
00:18:21.540  		--rc geninfo_unexecuted_blocks=1
00:18:21.540  		
00:18:21.540  		'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:21.540  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:21.540  		--rc genhtml_branch_coverage=1
00:18:21.540  		--rc genhtml_function_coverage=1
00:18:21.540  		--rc genhtml_legend=1
00:18:21.540  		--rc geninfo_all_blocks=1
00:18:21.540  		--rc geninfo_unexecuted_blocks=1
00:18:21.540  		
00:18:21.540  		'
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:21.540     23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:21.540      23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:21.540      23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:21.540      23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:21.540      23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH
00:18:21.540      23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:21.540  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=()
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:21.540    23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable
00:18:21.540   23:59:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=()
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:18:28.135  Found 0000:af:00.0 (0x8086 - 0x159b)
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:18:28.135  Found 0000:af:00.1 (0x8086 - 0x159b)
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:18:28.135   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:18:28.136  Found net devices under 0000:af:00.0: cvl_0_0
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:18:28.136  Found net devices under 0000:af:00.1: cvl_0_1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:18:28.136  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:18:28.136  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms
00:18:28.136  
00:18:28.136  --- 10.0.0.2 ping statistics ---
00:18:28.136  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:28.136  rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:18:28.136  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:18:28.136  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms
00:18:28.136  
00:18:28.136  --- 10.0.0.1 ping statistics ---
00:18:28.136  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:18:28.136  rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:18:28.136   23:59:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=3051895
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 3051895
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3051895 ']'
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:28.136  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:28.136   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.136  [2024-12-09 23:59:43.069088] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:28.136  [2024-12-09 23:59:43.069132] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:28.136  [2024-12-09 23:59:43.146458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:28.136  [2024-12-09 23:59:43.186515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:28.136  [2024-12-09 23:59:43.186553] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:28.136  [2024-12-09 23:59:43.186561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:28.136  [2024-12-09 23:59:43.186566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:28.136  [2024-12-09 23:59:43.186571] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:28.137  [2024-12-09 23:59:43.187837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:28.137  [2024-12-09 23:59:43.187947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:28.137  [2024-12-09 23:59:43.188052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:28.137  [2024-12-09 23:59:43.188052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137  [2024-12-09 23:59:43.337655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137  Malloc0
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137  Malloc1
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137  [2024-12-09 23:59:43.439129] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420
00:18:28.137  
00:18:28.137  Discovery Log Number of Records 2, Generation counter 2
00:18:28.137  =====Discovery Log Entry 0======
00:18:28.137  trtype:  tcp
00:18:28.137  adrfam:  ipv4
00:18:28.137  subtype: current discovery subsystem
00:18:28.137  treq:    not required
00:18:28.137  portid:  0
00:18:28.137  trsvcid: 4420
00:18:28.137  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:18:28.137  traddr:  10.0.0.2
00:18:28.137  eflags:  explicit discovery connections, duplicate discovery information
00:18:28.137  sectype: none
00:18:28.137  =====Discovery Log Entry 1======
00:18:28.137  trtype:  tcp
00:18:28.137  adrfam:  ipv4
00:18:28.137  subtype: nvme subsystem
00:18:28.137  treq:    not required
00:18:28.137  portid:  0
00:18:28.137  trsvcid: 4420
00:18:28.137  subnqn:  nqn.2016-06.io.spdk:cnode1
00:18:28.137  traddr:  10.0.0.2
00:18:28.137  eflags:  none
00:18:28.137  sectype: none
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:28.137     23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:18:28.137    23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0
00:18:28.137   23:59:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:18:29.073   23:59:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975     23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1
00:18:30.975  /dev/nvme0n2 ]]
00:18:30.975   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs))
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _
00:18:30.975    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:30.975     23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]]
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]]
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]]
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]]
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2
00:18:31.235    23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:18:31.235  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection ))
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:31.235   23:59:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:31.235  rmmod nvme_tcp
00:18:31.235  rmmod nvme_fabrics
00:18:31.235  rmmod nvme_keyring
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 3051895 ']'
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 3051895
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3051895 ']'
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3051895
00:18:31.235    23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname
00:18:31.235   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:31.235    23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3051895
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3051895'
00:18:31.494  killing process with pid 3051895
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3051895
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3051895
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:31.494   23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:31.494    23:59:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:18:34.030  
00:18:34.030  real	0m12.465s
00:18:34.030  user	0m18.050s
00:18:34.030  sys	0m5.060s
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:18:34.030  ************************************
00:18:34.030  END TEST nvmf_nvme_cli
00:18:34.030  ************************************
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]]
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:34.030  ************************************
00:18:34.030  START TEST nvmf_vfio_user
00:18:34.030  ************************************
00:18:34.030   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp
00:18:34.030  * Looking for test storage...
00:18:34.030  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-:
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-:
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<'
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:34.030     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0
00:18:34.030    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:34.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.031  		--rc genhtml_branch_coverage=1
00:18:34.031  		--rc genhtml_function_coverage=1
00:18:34.031  		--rc genhtml_legend=1
00:18:34.031  		--rc geninfo_all_blocks=1
00:18:34.031  		--rc geninfo_unexecuted_blocks=1
00:18:34.031  		
00:18:34.031  		'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:34.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.031  		--rc genhtml_branch_coverage=1
00:18:34.031  		--rc genhtml_function_coverage=1
00:18:34.031  		--rc genhtml_legend=1
00:18:34.031  		--rc geninfo_all_blocks=1
00:18:34.031  		--rc geninfo_unexecuted_blocks=1
00:18:34.031  		
00:18:34.031  		'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:18:34.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.031  		--rc genhtml_branch_coverage=1
00:18:34.031  		--rc genhtml_function_coverage=1
00:18:34.031  		--rc genhtml_legend=1
00:18:34.031  		--rc geninfo_all_blocks=1
00:18:34.031  		--rc geninfo_unexecuted_blocks=1
00:18:34.031  		
00:18:34.031  		'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:18:34.031  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:34.031  		--rc genhtml_branch_coverage=1
00:18:34.031  		--rc genhtml_function_coverage=1
00:18:34.031  		--rc genhtml_legend=1
00:18:34.031  		--rc geninfo_all_blocks=1
00:18:34.031  		--rc geninfo_unexecuted_blocks=1
00:18:34.031  		
00:18:34.031  		'
00:18:34.031   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:34.031     23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:34.031      23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.031      23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.031      23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.031      23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH
00:18:34.031      23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:18:34.031    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:18:34.031  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:18:34.032    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:18:34.032    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:18:34.032    23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' ''
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args=
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3053116
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3053116'
00:18:34.032  Process pid: 3053116
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3053116
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]'
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3053116 ']'
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:34.032  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:18:34.032   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:18:34.032  [2024-12-09 23:59:49.706982] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:34.032  [2024-12-09 23:59:49.707030] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:34.032  [2024-12-09 23:59:49.779362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:18:34.032  [2024-12-09 23:59:49.821099] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:18:34.032  [2024-12-09 23:59:49.821134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:18:34.032  [2024-12-09 23:59:49.821141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:18:34.032  [2024-12-09 23:59:49.821147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:18:34.032  [2024-12-09 23:59:49.821152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:18:34.032  [2024-12-09 23:59:49.822639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:18:34.032  [2024-12-09 23:59:49.822745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:18:34.032  [2024-12-09 23:59:49.822851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:34.032  [2024-12-09 23:59:49.822853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:18:34.290   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:34.290   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:18:34.290   23:59:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:18:35.223   23:59:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER
00:18:35.482   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:18:35.482    23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:18:35.482   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:35.482   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:18:35.482   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:18:35.482  Malloc1
00:18:35.741   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:18:35.741   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:18:35.999   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:18:36.258   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:36.258   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:18:36.258   23:59:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:18:36.517  Malloc2
00:18:36.517   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:18:36.517   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:18:36.776   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:18:37.036   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user
00:18:37.036    23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2
00:18:37.036   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:37.036   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1
00:18:37.036   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1
00:18:37.036   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci
00:18:37.036  [2024-12-09 23:59:52.798192] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:37.036  [2024-12-09 23:59:52.798237] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3053707 ]
00:18:37.036  [2024-12-09 23:59:52.837660] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1
00:18:37.036  [2024-12-09 23:59:52.843022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:37.036  [2024-12-09 23:59:52.843043] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f35eafac000
00:18:37.036  [2024-12-09 23:59:52.844019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.845019] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.846025] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.847026] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.848030] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.849039] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.850048] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.851054] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:18:37.036  [2024-12-09 23:59:52.852061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:37.036  [2024-12-09 23:59:52.852073] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f35eafa1000
00:18:37.036  [2024-12-09 23:59:52.852988] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:37.036  [2024-12-09 23:59:52.862434] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully
00:18:37.036  [2024-12-09 23:59:52.862456] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout)
00:18:37.036  [2024-12-09 23:59:52.868156] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:18:37.036  [2024-12-09 23:59:52.868194] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:18:37.036  [2024-12-09 23:59:52.868266] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout)
00:18:37.036  [2024-12-09 23:59:52.868281] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout)
00:18:37.036  [2024-12-09 23:59:52.868286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout)
00:18:37.036  [2024-12-09 23:59:52.869149] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300
00:18:37.036  [2024-12-09 23:59:52.869159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout)
00:18:37.036  [2024-12-09 23:59:52.869169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout)
00:18:37.036  [2024-12-09 23:59:52.870157] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff
00:18:37.036  [2024-12-09 23:59:52.870164] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout)
00:18:37.036  [2024-12-09 23:59:52.870176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms)
00:18:37.036  [2024-12-09 23:59:52.871163] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0
00:18:37.036  [2024-12-09 23:59:52.871177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:18:37.036  [2024-12-09 23:59:52.872171] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0
00:18:37.036  [2024-12-09 23:59:52.872178] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0
00:18:37.036  [2024-12-09 23:59:52.872182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms)
00:18:37.036  [2024-12-09 23:59:52.872189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:18:37.036  [2024-12-09 23:59:52.872295] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1
00:18:37.037  [2024-12-09 23:59:52.872300] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:18:37.037  [2024-12-09 23:59:52.872304] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000
00:18:37.037  [2024-12-09 23:59:52.873177] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000
00:18:37.037  [2024-12-09 23:59:52.874181] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff
00:18:37.037  [2024-12-09 23:59:52.875187] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:18:37.037  [2024-12-09 23:59:52.876186] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:37.037  [2024-12-09 23:59:52.876248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:18:37.037  [2024-12-09 23:59:52.877197] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1
00:18:37.037  [2024-12-09 23:59:52.877205] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:18:37.037  [2024-12-09 23:59:52.877209] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout)
00:18:37.037  [2024-12-09 23:59:52.877232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877248] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:18:37.037  [2024-12-09 23:59:52.877252] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:18:37.037  [2024-12-09 23:59:52.877256] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.037  [2024-12-09 23:59:52.877267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877314] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072
00:18:37.037  [2024-12-09 23:59:52.877318] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072
00:18:37.037  [2024-12-09 23:59:52.877322] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001
00:18:37.037  [2024-12-09 23:59:52.877326] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:18:37.037  [2024-12-09 23:59:52.877331] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1
00:18:37.037  [2024-12-09 23:59:52.877335] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1
00:18:37.037  [2024-12-09 23:59:52.877339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877354] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:37.037  [2024-12-09 23:59:52.877386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:37.037  [2024-12-09 23:59:52.877393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:37.037  [2024-12-09 23:59:52.877401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:18:37.037  [2024-12-09 23:59:52.877405] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877420] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877431] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms
00:18:37.037  [2024-12-09 23:59:52.877435] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877455] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877526] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:18:37.037  [2024-12-09 23:59:52.877530] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:18:37.037  [2024-12-09 23:59:52.877533] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.037  [2024-12-09 23:59:52.877538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877558] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added
00:18:37.037  [2024-12-09 23:59:52.877565] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877578] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:18:37.037  [2024-12-09 23:59:52.877581] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:18:37.037  [2024-12-09 23:59:52.877586] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.037  [2024-12-09 23:59:52.877591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877628] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877634] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:18:37.037  [2024-12-09 23:59:52.877638] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:18:37.037  [2024-12-09 23:59:52.877641] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.037  [2024-12-09 23:59:52.877646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:18:37.037  [2024-12-09 23:59:52.877657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:18:37.037  [2024-12-09 23:59:52.877665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:18:37.037  [2024-12-09 23:59:52.877671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877677] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877695] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID
00:18:37.038  [2024-12-09 23:59:52.877699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms)
00:18:37.038  [2024-12-09 23:59:52.877704] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout)
00:18:37.038  [2024-12-09 23:59:52.877718] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877737] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877755] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877797] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:18:37.038  [2024-12-09 23:59:52.877801] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:18:37.038  [2024-12-09 23:59:52.877805] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:18:37.038  [2024-12-09 23:59:52.877808] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:18:37.038  [2024-12-09 23:59:52.877811] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:18:37.038  [2024-12-09 23:59:52.877816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:18:37.038  [2024-12-09 23:59:52.877822] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:18:37.038  [2024-12-09 23:59:52.877826] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:18:37.038  [2024-12-09 23:59:52.877829] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.038  [2024-12-09 23:59:52.877835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877841] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:18:37.038  [2024-12-09 23:59:52.877844] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:18:37.038  [2024-12-09 23:59:52.877847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.038  [2024-12-09 23:59:52.877853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877859] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:18:37.038  [2024-12-09 23:59:52.877863] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:18:37.038  [2024-12-09 23:59:52.877866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:18:37.038  [2024-12-09 23:59:52.877871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:18:37.038  [2024-12-09 23:59:52.877877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:18:37.038  [2024-12-09 23:59:52.877904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:18:37.038  =====================================================
00:18:37.038  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:37.038  =====================================================
00:18:37.038  Controller Capabilities/Features
00:18:37.038  ================================
00:18:37.038  Vendor ID:                             4e58
00:18:37.038  Subsystem Vendor ID:                   4e58
00:18:37.038  Serial Number:                         SPDK1
00:18:37.038  Model Number:                          SPDK bdev Controller
00:18:37.038  Firmware Version:                      25.01
00:18:37.038  Recommended Arb Burst:                 6
00:18:37.038  IEEE OUI Identifier:                   8d 6b 50
00:18:37.038  Multi-path I/O
00:18:37.038    May have multiple subsystem ports:   Yes
00:18:37.038    May have multiple controllers:       Yes
00:18:37.038    Associated with SR-IOV VF:           No
00:18:37.038  Max Data Transfer Size:                131072
00:18:37.038  Max Number of Namespaces:              32
00:18:37.038  Max Number of I/O Queues:              127
00:18:37.038  NVMe Specification Version (VS):       1.3
00:18:37.038  NVMe Specification Version (Identify): 1.3
00:18:37.038  Maximum Queue Entries:                 256
00:18:37.038  Contiguous Queues Required:            Yes
00:18:37.038  Arbitration Mechanisms Supported
00:18:37.038    Weighted Round Robin:                Not Supported
00:18:37.038    Vendor Specific:                     Not Supported
00:18:37.038  Reset Timeout:                         15000 ms
00:18:37.038  Doorbell Stride:                       4 bytes
00:18:37.038  NVM Subsystem Reset:                   Not Supported
00:18:37.038  Command Sets Supported
00:18:37.038    NVM Command Set:                     Supported
00:18:37.038  Boot Partition:                        Not Supported
00:18:37.038  Memory Page Size Minimum:              4096 bytes
00:18:37.038  Memory Page Size Maximum:              4096 bytes
00:18:37.038  Persistent Memory Region:              Not Supported
00:18:37.038  Optional Asynchronous Events Supported
00:18:37.038    Namespace Attribute Notices:         Supported
00:18:37.038    Firmware Activation Notices:         Not Supported
00:18:37.038    ANA Change Notices:                  Not Supported
00:18:37.038    PLE Aggregate Log Change Notices:    Not Supported
00:18:37.038    LBA Status Info Alert Notices:       Not Supported
00:18:37.038    EGE Aggregate Log Change Notices:    Not Supported
00:18:37.038    Normal NVM Subsystem Shutdown event: Not Supported
00:18:37.038    Zone Descriptor Change Notices:      Not Supported
00:18:37.038    Discovery Log Change Notices:        Not Supported
00:18:37.038  Controller Attributes
00:18:37.038    128-bit Host Identifier:             Supported
00:18:37.038    Non-Operational Permissive Mode:     Not Supported
00:18:37.038    NVM Sets:                            Not Supported
00:18:37.038    Read Recovery Levels:                Not Supported
00:18:37.038    Endurance Groups:                    Not Supported
00:18:37.038    Predictable Latency Mode:            Not Supported
00:18:37.038    Traffic Based Keep ALive:            Not Supported
00:18:37.038    Namespace Granularity:               Not Supported
00:18:37.038    SQ Associations:                     Not Supported
00:18:37.038    UUID List:                           Not Supported
00:18:37.038    Multi-Domain Subsystem:              Not Supported
00:18:37.038    Fixed Capacity Management:           Not Supported
00:18:37.038    Variable Capacity Management:        Not Supported
00:18:37.038    Delete Endurance Group:              Not Supported
00:18:37.038    Delete NVM Set:                      Not Supported
00:18:37.038    Extended LBA Formats Supported:      Not Supported
00:18:37.038    Flexible Data Placement Supported:   Not Supported
00:18:37.038  
00:18:37.038  Controller Memory Buffer Support
00:18:37.038  ================================
00:18:37.038  Supported:                             No
00:18:37.038  
00:18:37.038  Persistent Memory Region Support
00:18:37.038  ================================
00:18:37.038  Supported:                             No
00:18:37.038  
00:18:37.038  Admin Command Set Attributes
00:18:37.038  ============================
00:18:37.038  Security Send/Receive:                 Not Supported
00:18:37.038  Format NVM:                            Not Supported
00:18:37.038  Firmware Activate/Download:            Not Supported
00:18:37.038  Namespace Management:                  Not Supported
00:18:37.038  Device Self-Test:                      Not Supported
00:18:37.038  Directives:                            Not Supported
00:18:37.039  NVMe-MI:                               Not Supported
00:18:37.039  Virtualization Management:             Not Supported
00:18:37.039  Doorbell Buffer Config:                Not Supported
00:18:37.039  Get LBA Status Capability:             Not Supported
00:18:37.039  Command & Feature Lockdown Capability: Not Supported
00:18:37.039  Abort Command Limit:                   4
00:18:37.039  Async Event Request Limit:             4
00:18:37.039  Number of Firmware Slots:              N/A
00:18:37.039  Firmware Slot 1 Read-Only:             N/A
00:18:37.039  Firmware Activation Without Reset:     N/A
00:18:37.039  Multiple Update Detection Support:     N/A
00:18:37.039  Firmware Update Granularity:           No Information Provided
00:18:37.039  Per-Namespace SMART Log:               No
00:18:37.039  Asymmetric Namespace Access Log Page:  Not Supported
00:18:37.039  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode1
00:18:37.039  Command Effects Log Page:              Supported
00:18:37.039  Get Log Page Extended Data:            Supported
00:18:37.039  Telemetry Log Pages:                   Not Supported
00:18:37.039  Persistent Event Log Pages:            Not Supported
00:18:37.039  Supported Log Pages Log Page:          May Support
00:18:37.039  Commands Supported & Effects Log Page: Not Supported
00:18:37.039  Feature Identifiers & Effects Log Page: May Support
00:18:37.039  NVMe-MI Commands & Effects Log Page:   May Support
00:18:37.039  Data Area 4 for Telemetry Log:         Not Supported
00:18:37.039  Error Log Page Entries Supported:      128
00:18:37.039  Keep Alive:                            Supported
00:18:37.039  Keep Alive Granularity:                10000 ms
00:18:37.039  
00:18:37.039  NVM Command Set Attributes
00:18:37.039  ==========================
00:18:37.039  Submission Queue Entry Size
00:18:37.039    Max:                       64
00:18:37.039    Min:                       64
00:18:37.039  Completion Queue Entry Size
00:18:37.039    Max:                       16
00:18:37.039    Min:                       16
00:18:37.039  Number of Namespaces:        32
00:18:37.039  Compare Command:             Supported
00:18:37.039  Write Uncorrectable Command: Not Supported
00:18:37.039  Dataset Management Command:  Supported
00:18:37.039  Write Zeroes Command:        Supported
00:18:37.039  Set Features Save Field:     Not Supported
00:18:37.039  Reservations:                Not Supported
00:18:37.039  Timestamp:                   Not Supported
00:18:37.039  Copy:                        Supported
00:18:37.039  Volatile Write Cache:        Present
00:18:37.039  Atomic Write Unit (Normal):  1
00:18:37.039  Atomic Write Unit (PFail):   1
00:18:37.039  Atomic Compare & Write Unit: 1
00:18:37.039  Fused Compare & Write:       Supported
00:18:37.039  Scatter-Gather List
00:18:37.039    SGL Command Set:           Supported (Dword aligned)
00:18:37.039    SGL Keyed:                 Not Supported
00:18:37.039    SGL Bit Bucket Descriptor: Not Supported
00:18:37.039    SGL Metadata Pointer:      Not Supported
00:18:37.039    Oversized SGL:             Not Supported
00:18:37.039    SGL Metadata Address:      Not Supported
00:18:37.039    SGL Offset:                Not Supported
00:18:37.039    Transport SGL Data Block:  Not Supported
00:18:37.039  Replay Protected Memory Block:  Not Supported
00:18:37.039  
00:18:37.039  Firmware Slot Information
00:18:37.039  =========================
00:18:37.039  Active slot:                 1
00:18:37.039  Slot 1 Firmware Revision:    25.01
00:18:37.039  
00:18:37.039  
00:18:37.039  Commands Supported and Effects
00:18:37.039  ==============================
00:18:37.039  Admin Commands
00:18:37.039  --------------
00:18:37.039                    Get Log Page (02h): Supported 
00:18:37.039                        Identify (06h): Supported 
00:18:37.039                           Abort (08h): Supported 
00:18:37.039                    Set Features (09h): Supported 
00:18:37.039                    Get Features (0Ah): Supported 
00:18:37.039      Asynchronous Event Request (0Ch): Supported 
00:18:37.039                      Keep Alive (18h): Supported 
00:18:37.039  I/O Commands
00:18:37.039  ------------
00:18:37.039                           Flush (00h): Supported LBA-Change 
00:18:37.039                           Write (01h): Supported LBA-Change 
00:18:37.039                            Read (02h): Supported 
00:18:37.039                         Compare (05h): Supported 
00:18:37.039                    Write Zeroes (08h): Supported LBA-Change 
00:18:37.039              Dataset Management (09h): Supported LBA-Change 
00:18:37.039                            Copy (19h): Supported LBA-Change 
00:18:37.039  
00:18:37.039  Error Log
00:18:37.039  =========
00:18:37.039  
00:18:37.039  Arbitration
00:18:37.039  ===========
00:18:37.039  Arbitration Burst:           1
00:18:37.039  
00:18:37.039  Power Management
00:18:37.039  ================
00:18:37.039  Number of Power States:          1
00:18:37.039  Current Power State:             Power State #0
00:18:37.039  Power State #0:
00:18:37.039    Max Power:                      0.00 W
00:18:37.039    Non-Operational State:         Operational
00:18:37.039    Entry Latency:                 Not Reported
00:18:37.039    Exit Latency:                  Not Reported
00:18:37.039    Relative Read Throughput:      0
00:18:37.039    Relative Read Latency:         0
00:18:37.039    Relative Write Throughput:     0
00:18:37.039    Relative Write Latency:        0
00:18:37.039    Idle Power:                     Not Reported
00:18:37.039    Active Power:                   Not Reported
00:18:37.039  Non-Operational Permissive Mode: Not Supported
00:18:37.039  
00:18:37.039  Health Information
00:18:37.039  ==================
00:18:37.039  Critical Warnings:
00:18:37.039    Available Spare Space:     OK
00:18:37.039    Temperature:               OK
00:18:37.039    Device Reliability:        OK
00:18:37.039    Read Only:                 No
00:18:37.039    Volatile Memory Backup:    OK
00:18:37.039  Current Temperature:         0 Kelvin (-273 Celsius)
00:18:37.039  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:18:37.039  Available Spare:             0%
00:18:37.039  [2024-12-09 23:59:52.877982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:18:37.039  [2024-12-09 23:59:52.877996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:18:37.039  [2024-12-09 23:59:52.878019] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD
00:18:37.039  [2024-12-09 23:59:52.878027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.039  [2024-12-09 23:59:52.878032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.039  [2024-12-09 23:59:52.878039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.039  [2024-12-09 23:59:52.878044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:37.039  [2024-12-09 23:59:52.878204] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001
00:18:37.039  [2024-12-09 23:59:52.878213] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001
00:18:37.039  [2024-12-09 23:59:52.879211] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:37.039  [2024-12-09 23:59:52.879257] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us
00:18:37.039  [2024-12-09 23:59:52.879264] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms
00:18:37.039  [2024-12-09 23:59:52.880214] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9
00:18:37.039  [2024-12-09 23:59:52.880224] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds
00:18:37.039  [2024-12-09 23:59:52.880270] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl
00:18:37.039  [2024-12-09 23:59:52.883176] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:18:37.298  Available Spare Threshold:   0%
00:18:37.298  Life Percentage Used:        0%
00:18:37.298  Data Units Read:             0
00:18:37.298  Data Units Written:          0
00:18:37.298  Host Read Commands:          0
00:18:37.298  Host Write Commands:         0
00:18:37.298  Controller Busy Time:        0 minutes
00:18:37.298  Power Cycles:                0
00:18:37.298  Power On Hours:              0 hours
00:18:37.299  Unsafe Shutdowns:            0
00:18:37.299  Unrecoverable Media Errors:  0
00:18:37.299  Lifetime Error Log Entries:  0
00:18:37.299  Warning Temperature Time:    0 minutes
00:18:37.299  Critical Temperature Time:   0 minutes
00:18:37.299  
00:18:37.299  Number of Queues
00:18:37.299  ================
00:18:37.299  Number of I/O Submission Queues:      127
00:18:37.299  Number of I/O Completion Queues:      127
00:18:37.299  
00:18:37.299  Active Namespaces
00:18:37.299  =================
00:18:37.299  Namespace ID:1
00:18:37.299  Error Recovery Timeout:                Unlimited
00:18:37.299  Command Set Identifier:                NVM (00h)
00:18:37.299  Deallocate:                            Supported
00:18:37.299  Deallocated/Unwritten Error:           Not Supported
00:18:37.299  Deallocated Read Value:                Unknown
00:18:37.299  Deallocate in Write Zeroes:            Not Supported
00:18:37.299  Deallocated Guard Field:               0xFFFF
00:18:37.299  Flush:                                 Supported
00:18:37.299  Reservation:                           Supported
00:18:37.299  Namespace Sharing Capabilities:        Multiple Controllers
00:18:37.299  Size (in LBAs):                        131072 (0GiB)
00:18:37.299  Capacity (in LBAs):                    131072 (0GiB)
00:18:37.299  Utilization (in LBAs):                 131072 (0GiB)
00:18:37.299  NGUID:                                 BB6029EA4E674629A5C816EFB5D2CD88
00:18:37.299  UUID:                                  bb6029ea-4e67-4629-a5c8-16efb5d2cd88
00:18:37.299  Thin Provisioning:                     Not Supported
00:18:37.299  Per-NS Atomic Units:                   Yes
00:18:37.299    Atomic Boundary Size (Normal):       0
00:18:37.299    Atomic Boundary Size (PFail):        0
00:18:37.299    Atomic Boundary Offset:              0
00:18:37.299  Maximum Single Source Range Length:    65535
00:18:37.299  Maximum Copy Length:                   65535
00:18:37.299  Maximum Source Range Count:            1
00:18:37.299  NGUID/EUI64 Never Reused:              No
00:18:37.299  Namespace Write Protected:             No
00:18:37.299  Number of LBA Formats:                 1
00:18:37.299  Current LBA Format:                    LBA Format #00
00:18:37.299  LBA Format #00: Data Size:   512  Metadata Size:     0
00:18:37.299  
00:18:37.299   23:59:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:18:37.299  [2024-12-09 23:59:53.113241] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:42.573  Initializing NVMe Controllers
00:18:42.573  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:42.573  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:18:42.573  Initialization complete. Launching workers.
00:18:42.573  ========================================================
00:18:42.574                                                                                                           Latency(us)
00:18:42.574  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:18:42.574  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   39926.56     155.96    3205.49     959.39   10603.24
00:18:42.574  ========================================================
00:18:42.574  Total                                                                :   39926.56     155.96    3205.49     959.39   10603.24
00:18:42.574  
00:18:42.574  [2024-12-09 23:59:58.135804] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:42.574   23:59:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:18:42.574  [2024-12-09 23:59:58.371858] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:47.843  Initializing NVMe Controllers
00:18:47.843  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:47.843  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1
00:18:47.843  Initialization complete. Launching workers.
00:18:47.843  ========================================================
00:18:47.843                                                                                                           Latency(us)
00:18:47.843  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:18:47.843  VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core  1:   16046.00      62.68    7987.03    5984.42   15487.65
00:18:47.843  ========================================================
00:18:47.843  Total                                                                :   16046.00      62.68    7987.03    5984.42   15487.65
00:18:47.843  
00:18:47.843  [2024-12-10 00:00:03.409020] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:47.843   00:00:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:18:47.843  [2024-12-10 00:00:03.614021] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:53.107  [2024-12-10 00:00:08.703585] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:53.108  Initializing NVMe Controllers
00:18:53.108  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:53.108  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1
00:18:53.108  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1
00:18:53.108  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2
00:18:53.108  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3
00:18:53.108  Initialization complete. Launching workers.
00:18:53.108  Starting thread on core 2
00:18:53.108  Starting thread on core 3
00:18:53.108  Starting thread on core 1
00:18:53.108   00:00:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g
00:18:53.365  [2024-12-10 00:00:08.993249] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:57.546  [2024-12-10 00:00:12.986389] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:57.546  Initializing NVMe Controllers
00:18:57.546  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:57.546  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:57.546  Associating SPDK bdev Controller (SPDK1               ) with lcore 0
00:18:57.546  Associating SPDK bdev Controller (SPDK1               ) with lcore 1
00:18:57.546  Associating SPDK bdev Controller (SPDK1               ) with lcore 2
00:18:57.546  Associating SPDK bdev Controller (SPDK1               ) with lcore 3
00:18:57.546  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:18:57.546  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:18:57.546  Initialization complete. Launching workers.
00:18:57.546  Starting thread on core 1 with urgent priority queue
00:18:57.546  Starting thread on core 2 with urgent priority queue
00:18:57.546  Starting thread on core 3 with urgent priority queue
00:18:57.546  Starting thread on core 0 with urgent priority queue
00:18:57.546  SPDK bdev Controller (SPDK1               ) core 0:  1499.00 IO/s    66.71 secs/100000 ios
00:18:57.546  SPDK bdev Controller (SPDK1               ) core 1:  1646.00 IO/s    60.75 secs/100000 ios
00:18:57.546  SPDK bdev Controller (SPDK1               ) core 2:  2083.67 IO/s    47.99 secs/100000 ios
00:18:57.546  SPDK bdev Controller (SPDK1               ) core 3:  2330.67 IO/s    42.91 secs/100000 ios
00:18:57.546  ========================================================
00:18:57.546  
00:18:57.546   00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:18:57.546  [2024-12-10 00:00:13.278602] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:57.546  Initializing NVMe Controllers
00:18:57.546  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:57.546  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:57.546    Namespace ID: 1 size: 0GB
00:18:57.546  Initialization complete.
00:18:57.546  INFO: using host memory buffer for IO
00:18:57.546  Hello world!
00:18:57.546  [2024-12-10 00:00:13.312805] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:57.546   00:00:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:18:57.805  [2024-12-10 00:00:13.592594] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:59.184  Initializing NVMe Controllers
00:18:59.185  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:59.185  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:59.185  Initialization complete. Launching workers.
00:18:59.185  submit (in ns)   avg, min, max =   5601.3,   3137.1, 3999981.9
00:18:59.185  complete (in ns) avg, min, max =  20622.3,   1707.6, 3999579.0
00:18:59.185  
00:18:59.185  Submit histogram
00:18:59.185  ================
00:18:59.185         Range in us     Cumulative     Count
00:18:59.185      3.124 -     3.139:    0.0062%  (        1)
00:18:59.185      3.139 -     3.154:    0.0247%  (        3)
00:18:59.185      3.154 -     3.170:    0.0371%  (        2)
00:18:59.185      3.170 -     3.185:    0.0865%  (        8)
00:18:59.185      3.185 -     3.200:    0.3893%  (       49)
00:18:59.185      3.200 -     3.215:    1.4583%  (      173)
00:18:59.185      3.215 -     3.230:    4.8384%  (      547)
00:18:59.185      3.230 -     3.246:    9.6521%  (      779)
00:18:59.185      3.246 -     3.261:   15.4298%  (      935)
00:18:59.185      3.261 -     3.276:   22.3135%  (     1114)
00:18:59.185      3.276 -     3.291:   28.7771%  (     1046)
00:18:59.185      3.291 -     3.307:   34.3694%  (      905)
00:18:59.185      3.307 -     3.322:   39.3808%  (      811)
00:18:59.185      3.322 -     3.337:   45.0967%  (      925)
00:18:59.185      3.337 -     3.352:   49.2554%  (      673)
00:18:59.185      3.352 -     3.368:   53.7787%  (      732)
00:18:59.185      3.368 -     3.383:   60.7057%  (     1121)
00:18:59.185      3.383 -     3.398:   66.5328%  (      943)
00:18:59.185      3.398 -     3.413:   72.0633%  (      895)
00:18:59.185      3.413 -     3.429:   76.9882%  (      797)
00:18:59.185      3.429 -     3.444:   81.0480%  (      657)
00:18:59.185      3.444 -     3.459:   83.7855%  (      443)
00:18:59.185      3.459 -     3.474:   85.5775%  (      290)
00:18:59.185      3.474 -     3.490:   86.6712%  (      177)
00:18:59.185      3.490 -     3.505:   87.4807%  (      131)
00:18:59.185      3.505 -     3.520:   88.1851%  (      114)
00:18:59.185      3.520 -     3.535:   88.9575%  (      125)
00:18:59.185      3.535 -     3.550:   89.9401%  (      159)
00:18:59.185      3.550 -     3.566:   90.8546%  (      148)
00:18:59.185      3.566 -     3.581:   91.6765%  (      133)
00:18:59.185      3.581 -     3.596:   92.4489%  (      125)
00:18:59.185      3.596 -     3.611:   93.3140%  (      140)
00:18:59.185      3.611 -     3.627:   94.2223%  (      147)
00:18:59.185      3.627 -     3.642:   95.0998%  (      142)
00:18:59.185      3.642 -     3.657:   95.8784%  (      126)
00:18:59.185      3.657 -     3.672:   96.7559%  (      142)
00:18:59.185      3.672 -     3.688:   97.3491%  (       96)
00:18:59.185      3.688 -     3.703:   98.0412%  (      112)
00:18:59.185      3.703 -     3.718:   98.4552%  (       67)
00:18:59.185      3.718 -     3.733:   98.8383%  (       62)
00:18:59.185      3.733 -     3.749:   99.0669%  (       37)
00:18:59.185      3.749 -     3.764:   99.3388%  (       44)
00:18:59.185      3.764 -     3.779:   99.4809%  (       23)
00:18:59.185      3.779 -     3.794:   99.5983%  (       19)
00:18:59.185      3.794 -     3.810:   99.6478%  (        8)
00:18:59.185      3.810 -     3.825:   99.6849%  (        6)
00:18:59.185      3.855 -     3.870:   99.6972%  (        2)
00:18:59.185      3.870 -     3.886:   99.7096%  (        2)
00:18:59.185      4.114 -     4.145:   99.7158%  (        1)
00:18:59.185      5.303 -     5.333:   99.7219%  (        1)
00:18:59.185      5.547 -     5.577:   99.7281%  (        1)
00:18:59.185      5.638 -     5.669:   99.7343%  (        1)
00:18:59.185      5.669 -     5.699:   99.7405%  (        1)
00:18:59.185      5.882 -     5.912:   99.7466%  (        1)
00:18:59.185      6.065 -     6.095:   99.7528%  (        1)
00:18:59.185      6.126 -     6.156:   99.7590%  (        1)
00:18:59.185      6.187 -     6.217:   99.7652%  (        1)
00:18:59.185      6.248 -     6.278:   99.7714%  (        1)
00:18:59.185      6.339 -     6.370:   99.7775%  (        1)
00:18:59.185      6.370 -     6.400:   99.7837%  (        1)
00:18:59.185      6.461 -     6.491:   99.7961%  (        2)
00:18:59.185      6.552 -     6.583:   99.8084%  (        2)
00:18:59.185      6.644 -     6.674:   99.8146%  (        1)
00:18:59.185      6.674 -     6.705:   99.8332%  (        3)
00:18:59.185      6.705 -     6.735:   99.8393%  (        1)
00:18:59.185      6.735 -     6.766:   99.8455%  (        1)
00:18:59.185      6.857 -     6.888:   99.8579%  (        2)
00:18:59.185      6.918 -     6.949:   99.8641%  (        1)
00:18:59.185      7.010 -     7.040:   99.8702%  (        1)
00:18:59.185      7.040 -     7.070:   99.8764%  (        1)
00:18:59.185      7.192 -     7.223:   99.8826%  (        1)
00:18:59.185      7.223 -     7.253:   99.8888%  (        1)
00:18:59.185      7.375 -     7.406:   99.8950%  (        1)
00:18:59.185      7.467 -     7.497:   99.9011%  (        1)
00:18:59.185      7.528 -     7.558:   99.9073%  (        1)
00:18:59.185      7.558 -     7.589:   99.9135%  (        1)
00:18:59.185      7.741 -     7.771:   99.9197%  (        1)
00:18:59.185      7.863 -     7.924:   99.9258%  (        1)
00:18:59.185      7.985 -     8.046:   99.9320%  (        1)
00:18:59.185   [2024-12-10 00:00:14.616638] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:59.185     8.716 -     8.777:   99.9382%  (        1)
00:18:59.185     13.105 -    13.166:   99.9444%  (        1)
00:18:59.185   3994.575 -  4025.783:  100.0000%  (        9)
00:18:59.185  
00:18:59.185  Complete histogram
00:18:59.185  ==================
00:18:59.185         Range in us     Cumulative     Count
00:18:59.185      1.707 -     1.714:    0.0247%  (        4)
00:18:59.185      1.714 -     1.722:    0.1359%  (       18)
00:18:59.185      1.722 -     1.730:    0.2286%  (       15)
00:18:59.185      1.730 -     1.737:    0.2595%  (        5)
00:18:59.185      1.737 -     1.745:    0.2657%  (        1)
00:18:59.185      1.745 -     1.752:    0.2904%  (        4)
00:18:59.185      1.752 -     1.760:    1.0319%  (      120)
00:18:59.185      1.760 -     1.768:   11.1846%  (     1643)
00:18:59.185      1.768 -     1.775:   41.5807%  (     4919)
00:18:59.185      1.775 -     1.783:   66.8603%  (     4091)
00:18:59.185      1.783 -     1.790:   74.4300%  (     1225)
00:18:59.185      1.790 -     1.798:   77.9769%  (      574)
00:18:59.185      1.798 -     1.806:   81.2519%  (      530)
00:18:59.185      1.806 -     1.813:   83.2046%  (      316)
00:18:59.185      1.813 -     1.821:   86.0471%  (      460)
00:18:59.185      1.821 -     1.829:   90.4035%  (      705)
00:18:59.185      1.829 -     1.836:   93.4376%  (      491)
00:18:59.185      1.836 -     1.844:   95.3779%  (      314)
00:18:59.185      1.844 -     1.851:   96.7620%  (      224)
00:18:59.185      1.851 -     1.859:   97.8496%  (      176)
00:18:59.185      1.859 -     1.867:   98.3810%  (       86)
00:18:59.185      1.867 -     1.874:   98.6529%  (       44)
00:18:59.185      1.874 -     1.882:   98.8321%  (       29)
00:18:59.185      1.882 -     1.890:   98.9619%  (       21)
00:18:59.185      1.890 -     1.897:   99.0422%  (       13)
00:18:59.185      1.897 -     1.905:   99.1040%  (       10)
00:18:59.185      1.905 -     1.912:   99.1534%  (        8)
00:18:59.185      1.912 -     1.920:   99.1658%  (        2)
00:18:59.185      1.920 -     1.928:   99.1781%  (        2)
00:18:59.185      1.928 -     1.935:   99.1905%  (        2)
00:18:59.185      1.935 -     1.943:   99.1967%  (        1)
00:18:59.185      1.943 -     1.950:   99.2090%  (        2)
00:18:59.185      1.950 -     1.966:   99.2585%  (        8)
00:18:59.185      1.966 -     1.981:   99.2647%  (        1)
00:18:59.185      1.981 -     1.996:   99.2708%  (        1)
00:18:59.185      2.011 -     2.027:   99.2770%  (        1)
00:18:59.185      2.027 -     2.042:   99.2832%  (        1)
00:18:59.185      2.057 -     2.072:   99.2894%  (        1)
00:18:59.185      3.520 -     3.535:   99.2956%  (        1)
00:18:59.185      3.596 -     3.611:   99.3017%  (        1)
00:18:59.185      3.611 -     3.627:   99.3079%  (        1)
00:18:59.185      3.718 -     3.733:   99.3141%  (        1)
00:18:59.185      3.962 -     3.992:   99.3265%  (        2)
00:18:59.185      4.114 -     4.145:   99.3326%  (        1)
00:18:59.185      4.206 -     4.236:   99.3388%  (        1)
00:18:59.185      4.328 -     4.358:   99.3450%  (        1)
00:18:59.185      4.450 -     4.480:   99.3512%  (        1)
00:18:59.185      4.510 -     4.541:   99.3574%  (        1)
00:18:59.185      4.541 -     4.571:   99.3635%  (        1)
00:18:59.185      4.693 -     4.724:   99.3759%  (        2)
00:18:59.185      4.785 -     4.815:   99.3821%  (        1)
00:18:59.185      4.876 -     4.907:   99.3882%  (        1)
00:18:59.185      4.907 -     4.937:   99.3944%  (        1)
00:18:59.185      4.998 -     5.029:   99.4006%  (        1)
00:18:59.185      5.090 -     5.120:   99.4068%  (        1)
00:18:59.185      5.120 -     5.150:   99.4130%  (        1)
00:18:59.185      5.211 -     5.242:   99.4191%  (        1)
00:18:59.185      5.242 -     5.272:   99.4253%  (        1)
00:18:59.185      5.333 -     5.364:   99.4315%  (        1)
00:18:59.185      5.455 -     5.486:   99.4377%  (        1)
00:18:59.185      5.699 -     5.730:   99.4439%  (        1)
00:18:59.185      5.760 -     5.790:   99.4500%  (        1)
00:18:59.185      6.095 -     6.126:   99.4562%  (        1)
00:18:59.185      6.187 -     6.217:   99.4624%  (        1)
00:18:59.185      6.491 -     6.522:   99.4686%  (        1)
00:18:59.185      6.644 -     6.674:   99.4748%  (        1)
00:18:59.185      6.674 -     6.705:   99.4809%  (        1)
00:18:59.185      7.589 -     7.619:   99.4871%  (        1)
00:18:59.186      9.143 -     9.204:   99.4933%  (        1)
00:18:59.186     10.301 -    10.362:   99.4995%  (        1)
00:18:59.186     11.459 -    11.520:   99.5057%  (        1)
00:18:59.186     12.130 -    12.190:   99.5118%  (        1)
00:18:59.186     17.432 -    17.554:   99.5180%  (        1)
00:18:59.186     63.878 -    64.366:   99.5242%  (        1)
00:18:59.186   1022.050 -  1029.851:   99.5304%  (        1)
00:18:59.186   3994.575 -  4025.783:  100.0000%  (       76)
00:18:59.186  
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:59.186  [
00:18:59.186    {
00:18:59.186      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:59.186      "subtype": "Discovery",
00:18:59.186      "listen_addresses": [],
00:18:59.186      "allow_any_host": true,
00:18:59.186      "hosts": []
00:18:59.186    },
00:18:59.186    {
00:18:59.186      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:59.186      "subtype": "NVMe",
00:18:59.186      "listen_addresses": [
00:18:59.186        {
00:18:59.186          "trtype": "VFIOUSER",
00:18:59.186          "adrfam": "IPv4",
00:18:59.186          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:59.186          "trsvcid": "0"
00:18:59.186        }
00:18:59.186      ],
00:18:59.186      "allow_any_host": true,
00:18:59.186      "hosts": [],
00:18:59.186      "serial_number": "SPDK1",
00:18:59.186      "model_number": "SPDK bdev Controller",
00:18:59.186      "max_namespaces": 32,
00:18:59.186      "min_cntlid": 1,
00:18:59.186      "max_cntlid": 65519,
00:18:59.186      "namespaces": [
00:18:59.186        {
00:18:59.186          "nsid": 1,
00:18:59.186          "bdev_name": "Malloc1",
00:18:59.186          "name": "Malloc1",
00:18:59.186          "nguid": "BB6029EA4E674629A5C816EFB5D2CD88",
00:18:59.186          "uuid": "bb6029ea-4e67-4629-a5c8-16efb5d2cd88"
00:18:59.186        }
00:18:59.186      ]
00:18:59.186    },
00:18:59.186    {
00:18:59.186      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:59.186      "subtype": "NVMe",
00:18:59.186      "listen_addresses": [
00:18:59.186        {
00:18:59.186          "trtype": "VFIOUSER",
00:18:59.186          "adrfam": "IPv4",
00:18:59.186          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:59.186          "trsvcid": "0"
00:18:59.186        }
00:18:59.186      ],
00:18:59.186      "allow_any_host": true,
00:18:59.186      "hosts": [],
00:18:59.186      "serial_number": "SPDK2",
00:18:59.186      "model_number": "SPDK bdev Controller",
00:18:59.186      "max_namespaces": 32,
00:18:59.186      "min_cntlid": 1,
00:18:59.186      "max_cntlid": 65519,
00:18:59.186      "namespaces": [
00:18:59.186        {
00:18:59.186          "nsid": 1,
00:18:59.186          "bdev_name": "Malloc2",
00:18:59.186          "name": "Malloc2",
00:18:59.186          "nguid": "05951107F6A4453CAADFDA297DCF898E",
00:18:59.186          "uuid": "05951107-f6a4-453c-aadf-da297dcf898e"
00:18:59.186        }
00:18:59.186      ]
00:18:59.186    }
00:18:59.186  ]
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user1/1 		subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3057883
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:18:59.186   00:00:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:18:59.186  [2024-12-10 00:00:15.014552] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:18:59.444  Malloc3
00:18:59.444   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:18:59.444  [2024-12-10 00:00:15.264419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:18:59.444   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:18:59.444  Asynchronous Event Request test
00:18:59.444  Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:18:59.444  Attached to /var/run/vfio-user/domain/vfio-user1/1
00:18:59.444  Registering asynchronous event callbacks...
00:18:59.444  Starting namespace attribute notice tests for all controllers...
00:18:59.444  /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:18:59.444  aer_cb - Changed Namespace
00:18:59.444  Cleaning up...
00:18:59.703  [
00:18:59.703    {
00:18:59.703      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:18:59.703      "subtype": "Discovery",
00:18:59.703      "listen_addresses": [],
00:18:59.703      "allow_any_host": true,
00:18:59.703      "hosts": []
00:18:59.703    },
00:18:59.703    {
00:18:59.703      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:18:59.703      "subtype": "NVMe",
00:18:59.703      "listen_addresses": [
00:18:59.703        {
00:18:59.703          "trtype": "VFIOUSER",
00:18:59.703          "adrfam": "IPv4",
00:18:59.703          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:18:59.703          "trsvcid": "0"
00:18:59.703        }
00:18:59.703      ],
00:18:59.703      "allow_any_host": true,
00:18:59.703      "hosts": [],
00:18:59.703      "serial_number": "SPDK1",
00:18:59.703      "model_number": "SPDK bdev Controller",
00:18:59.703      "max_namespaces": 32,
00:18:59.703      "min_cntlid": 1,
00:18:59.703      "max_cntlid": 65519,
00:18:59.703      "namespaces": [
00:18:59.703        {
00:18:59.703          "nsid": 1,
00:18:59.703          "bdev_name": "Malloc1",
00:18:59.703          "name": "Malloc1",
00:18:59.703          "nguid": "BB6029EA4E674629A5C816EFB5D2CD88",
00:18:59.703          "uuid": "bb6029ea-4e67-4629-a5c8-16efb5d2cd88"
00:18:59.703        },
00:18:59.703        {
00:18:59.703          "nsid": 2,
00:18:59.703          "bdev_name": "Malloc3",
00:18:59.703          "name": "Malloc3",
00:18:59.703          "nguid": "DB75ECA14DD94882A22393FC97EEF242",
00:18:59.703          "uuid": "db75eca1-4dd9-4882-a223-93fc97eef242"
00:18:59.703        }
00:18:59.703      ]
00:18:59.703    },
00:18:59.703    {
00:18:59.703      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:18:59.703      "subtype": "NVMe",
00:18:59.703      "listen_addresses": [
00:18:59.703        {
00:18:59.703          "trtype": "VFIOUSER",
00:18:59.703          "adrfam": "IPv4",
00:18:59.703          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:18:59.703          "trsvcid": "0"
00:18:59.703        }
00:18:59.703      ],
00:18:59.703      "allow_any_host": true,
00:18:59.703      "hosts": [],
00:18:59.703      "serial_number": "SPDK2",
00:18:59.703      "model_number": "SPDK bdev Controller",
00:18:59.703      "max_namespaces": 32,
00:18:59.703      "min_cntlid": 1,
00:18:59.703      "max_cntlid": 65519,
00:18:59.703      "namespaces": [
00:18:59.703        {
00:18:59.703          "nsid": 1,
00:18:59.703          "bdev_name": "Malloc2",
00:18:59.703          "name": "Malloc2",
00:18:59.703          "nguid": "05951107F6A4453CAADFDA297DCF898E",
00:18:59.703          "uuid": "05951107-f6a4-453c-aadf-da297dcf898e"
00:18:59.703        }
00:18:59.703      ]
00:18:59.703    }
00:18:59.703  ]
00:18:59.703   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3057883
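Editor's note: the AER test above adds `Malloc3` as namespace 2 of `cnode1` and then re-runs `nvmf_get_subsystems` to confirm the change. A minimal sketch of how that confirmation could be done programmatically is below; the JSON literal is abridged from the RPC output in the log, and the helper name `namespaces_of` is illustrative, not part of SPDK.

```python
import json

# Abridged nvmf_get_subsystems output after nvmf_subsystem_add_ns
# (taken from the log above; most fields omitted for brevity)
subsystems_json = '''
[
  {"nqn": "nqn.2019-07.io.spdk:cnode1",
   "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc1"},
     {"nsid": 2, "bdev_name": "Malloc3"}
   ]}
]
'''

def namespaces_of(subsystems, nqn):
    """Return the namespace list of the subsystem with the given NQN."""
    for subsys in subsystems:
        if subsys["nqn"] == nqn:
            return subsys.get("namespaces", [])
    return []

nss = namespaces_of(json.loads(subsystems_json), "nqn.2019-07.io.spdk:cnode1")
bdevs = {ns["bdev_name"] for ns in nss}
print(sorted(bdevs))  # ['Malloc1', 'Malloc3']
```

In the real test the same check is implicit: the AER handler fires (`aer_cb - Changed Namespace`) and the second `nvmf_get_subsystems` dump shows nsid 2 present.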
00:18:59.703   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:18:59.703   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:18:59.703   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:18:59.703   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:18:59.703  [2024-12-10 00:00:15.514662] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:18:59.703  [2024-12-10 00:00:15.514710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3057904 ]
00:18:59.703  [2024-12-10 00:00:15.553504] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2
00:18:59.703  [2024-12-10 00:00:15.558747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32
00:18:59.703  [2024-12-10 00:00:15.558772] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f066ebcc000
00:18:59.703  [2024-12-10 00:00:15.559744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.560752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.561764] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.562781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.563780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.564788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.565799] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.566805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0
00:19:00.022  [2024-12-10 00:00:15.567819] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32
00:19:00.022  [2024-12-10 00:00:15.567831] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f066ebc1000
00:19:00.022  [2024-12-10 00:00:15.568747] vfio_user_pci.c:  65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:19:00.022  [2024-12-10 00:00:15.578120] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully
00:19:00.022  [2024-12-10 00:00:15.578142] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout)
00:19:00.022  [2024-12-10 00:00:15.582220] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:19:00.022  [2024-12-10 00:00:15.582255] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192
00:19:00.022  [2024-12-10 00:00:15.582327] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout)
00:19:00.022  [2024-12-10 00:00:15.582341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout)
00:19:00.022  [2024-12-10 00:00:15.582346] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout)
00:19:00.022  [2024-12-10 00:00:15.583228] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300
00:19:00.022  [2024-12-10 00:00:15.583237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout)
00:19:00.022  [2024-12-10 00:00:15.583244] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout)
00:19:00.022  [2024-12-10 00:00:15.584226] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff
00:19:00.022  [2024-12-10 00:00:15.584235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout)
00:19:00.022  [2024-12-10 00:00:15.584242] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.585231] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0
00:19:00.022  [2024-12-10 00:00:15.585240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.586235] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0
00:19:00.022  [2024-12-10 00:00:15.586243] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0
00:19:00.022  [2024-12-10 00:00:15.586248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.586253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.586360] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1
00:19:00.022  [2024-12-10 00:00:15.586365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.586369] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000
00:19:00.022  [2024-12-10 00:00:15.589171] nvme_vfio_user.c:  61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000
00:19:00.022  [2024-12-10 00:00:15.589261] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff
00:19:00.022  [2024-12-10 00:00:15.590268] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:19:00.022  [2024-12-10 00:00:15.591267] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:00.022  [2024-12-10 00:00:15.591303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:19:00.022  [2024-12-10 00:00:15.592279] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1
00:19:00.022  [2024-12-10 00:00:15.592288] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:19:00.022  [2024-12-10 00:00:15.592292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms)
00:19:00.022  [2024-12-10 00:00:15.592309] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout)
00:19:00.022  [2024-12-10 00:00:15.592316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms)
00:19:00.022  [2024-12-10 00:00:15.592331] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:19:00.022  [2024-12-10 00:00:15.592335] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:19:00.022  [2024-12-10 00:00:15.592338] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.022  [2024-12-10 00:00:15.592348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:19:00.022  [2024-12-10 00:00:15.600173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0
00:19:00.022  [2024-12-10 00:00:15.600183] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072
00:19:00.022  [2024-12-10 00:00:15.600188] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072
00:19:00.022  [2024-12-10 00:00:15.600192] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001
00:19:00.022  [2024-12-10 00:00:15.600196] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000
00:19:00.022  [2024-12-10 00:00:15.600200] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1
00:19:00.022  [2024-12-10 00:00:15.600204] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1
00:19:00.022  [2024-12-10 00:00:15.600208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms)
00:19:00.022  [2024-12-10 00:00:15.600215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.600224] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.608172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.608183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:00.023  [2024-12-10 00:00:15.608191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:00.023  [2024-12-10 00:00:15.608198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:00.023  [2024-12-10 00:00:15.608208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:19:00.023  [2024-12-10 00:00:15.608213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.608223] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.608231] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.616172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.616179] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms
00:19:00.023  [2024-12-10 00:00:15.616183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.616191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.616197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.616204] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.624177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.624229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.624236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.624244] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096
00:19:00.023  [2024-12-10 00:00:15.624248] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000
00:19:00.023  [2024-12-10 00:00:15.624251] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.023  [2024-12-10 00:00:15.624257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.632173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.632186] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added
00:19:00.023  [2024-12-10 00:00:15.632193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.632199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.632206] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:19:00.023  [2024-12-10 00:00:15.632210] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:19:00.023  [2024-12-10 00:00:15.632213] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.023  [2024-12-10 00:00:15.632219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.640174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.640186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.640193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.640199] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096
00:19:00.023  [2024-12-10 00:00:15.640203] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:19:00.023  [2024-12-10 00:00:15.640206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.023  [2024-12-10 00:00:15.640212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.648171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.648183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648214] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID
00:19:00.023  [2024-12-10 00:00:15.648218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms)
00:19:00.023  [2024-12-10 00:00:15.648223] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout)
00:19:00.023  [2024-12-10 00:00:15.648238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.656172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.656184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.664172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.664183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.672170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.672182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.680173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:19:00.023  [2024-12-10 00:00:15.680189] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:19:00.023  [2024-12-10 00:00:15.680194] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:19:00.023  [2024-12-10 00:00:15.680197] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:19:00.023  [2024-12-10 00:00:15.680200] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:19:00.023  [2024-12-10 00:00:15.680203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2
00:19:00.023  [2024-12-10 00:00:15.680209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:19:00.023  [2024-12-10 00:00:15.680216] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:19:00.023  [2024-12-10 00:00:15.680220] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:19:00.023  [2024-12-10 00:00:15.680223] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.023  [2024-12-10 00:00:15.680228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:19:00.023  [2024-12-10 00:00:15.680234] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:19:00.024  [2024-12-10 00:00:15.680238] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:19:00.024  [2024-12-10 00:00:15.680241] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.024  [2024-12-10 00:00:15.680246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:19:00.024  [2024-12-10 00:00:15.680253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:19:00.024  [2024-12-10 00:00:15.680256] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:19:00.024  [2024-12-10 00:00:15.680260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1
00:19:00.024  [2024-12-10 00:00:15.680265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:19:00.024  [2024-12-10 00:00:15.688171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:19:00.024  [2024-12-10 00:00:15.688184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:19:00.024  [2024-12-10 00:00:15.688194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:19:00.024  [2024-12-10 00:00:15.688200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:19:00.024  =====================================================
00:19:00.024  NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:00.024  =====================================================
00:19:00.024  Controller Capabilities/Features
00:19:00.024  ================================
00:19:00.024  Vendor ID:                             4e58
00:19:00.024  Subsystem Vendor ID:                   4e58
00:19:00.024  Serial Number:                         SPDK2
00:19:00.024  Model Number:                          SPDK bdev Controller
00:19:00.024  Firmware Version:                      25.01
00:19:00.024  Recommended Arb Burst:                 6
00:19:00.024  IEEE OUI Identifier:                   8d 6b 50
00:19:00.024  Multi-path I/O
00:19:00.024    May have multiple subsystem ports:   Yes
00:19:00.024    May have multiple controllers:       Yes
00:19:00.024    Associated with SR-IOV VF:           No
00:19:00.024  Max Data Transfer Size:                131072
00:19:00.024  Max Number of Namespaces:              32
00:19:00.024  Max Number of I/O Queues:              127
00:19:00.024  NVMe Specification Version (VS):       1.3
00:19:00.024  NVMe Specification Version (Identify): 1.3
00:19:00.024  Maximum Queue Entries:                 256
00:19:00.024  Contiguous Queues Required:            Yes
00:19:00.024  Arbitration Mechanisms Supported
00:19:00.024    Weighted Round Robin:                Not Supported
00:19:00.024    Vendor Specific:                     Not Supported
00:19:00.024  Reset Timeout:                         15000 ms
00:19:00.024  Doorbell Stride:                       4 bytes
00:19:00.024  NVM Subsystem Reset:                   Not Supported
00:19:00.024  Command Sets Supported
00:19:00.024    NVM Command Set:                     Supported
00:19:00.024  Boot Partition:                        Not Supported
00:19:00.024  Memory Page Size Minimum:              4096 bytes
00:19:00.024  Memory Page Size Maximum:              4096 bytes
00:19:00.024  Persistent Memory Region:              Not Supported
00:19:00.024  Optional Asynchronous Events Supported
00:19:00.024    Namespace Attribute Notices:         Supported
00:19:00.024    Firmware Activation Notices:         Not Supported
00:19:00.024    ANA Change Notices:                  Not Supported
00:19:00.024    PLE Aggregate Log Change Notices:    Not Supported
00:19:00.024    LBA Status Info Alert Notices:       Not Supported
00:19:00.024    EGE Aggregate Log Change Notices:    Not Supported
00:19:00.024    Normal NVM Subsystem Shutdown event: Not Supported
00:19:00.024    Zone Descriptor Change Notices:      Not Supported
00:19:00.024    Discovery Log Change Notices:        Not Supported
00:19:00.024  Controller Attributes
00:19:00.024    128-bit Host Identifier:             Supported
00:19:00.024    Non-Operational Permissive Mode:     Not Supported
00:19:00.024    NVM Sets:                            Not Supported
00:19:00.024    Read Recovery Levels:                Not Supported
00:19:00.024    Endurance Groups:                    Not Supported
00:19:00.024    Predictable Latency Mode:            Not Supported
00:19:00.024    Traffic Based Keep Alive:            Not Supported
00:19:00.024    Namespace Granularity:               Not Supported
00:19:00.024    SQ Associations:                     Not Supported
00:19:00.024    UUID List:                           Not Supported
00:19:00.024    Multi-Domain Subsystem:              Not Supported
00:19:00.024    Fixed Capacity Management:           Not Supported
00:19:00.024    Variable Capacity Management:        Not Supported
00:19:00.024    Delete Endurance Group:              Not Supported
00:19:00.024    Delete NVM Set:                      Not Supported
00:19:00.024    Extended LBA Formats Supported:      Not Supported
00:19:00.024    Flexible Data Placement Supported:   Not Supported
00:19:00.024  
00:19:00.024  Controller Memory Buffer Support
00:19:00.024  ================================
00:19:00.024  Supported:                             No
00:19:00.024  
00:19:00.024  Persistent Memory Region Support
00:19:00.024  ================================
00:19:00.024  Supported:                             No
00:19:00.024  
00:19:00.024  Admin Command Set Attributes
00:19:00.024  ============================
00:19:00.024  Security Send/Receive:                 Not Supported
00:19:00.024  Format NVM:                            Not Supported
00:19:00.024  Firmware Activate/Download:            Not Supported
00:19:00.024  Namespace Management:                  Not Supported
00:19:00.024  Device Self-Test:                      Not Supported
00:19:00.024  Directives:                            Not Supported
00:19:00.024  NVMe-MI:                               Not Supported
00:19:00.024  Virtualization Management:             Not Supported
00:19:00.024  Doorbell Buffer Config:                Not Supported
00:19:00.024  Get LBA Status Capability:             Not Supported
00:19:00.024  Command & Feature Lockdown Capability: Not Supported
00:19:00.024  Abort Command Limit:                   4
00:19:00.024  Async Event Request Limit:             4
00:19:00.024  Number of Firmware Slots:              N/A
00:19:00.024  Firmware Slot 1 Read-Only:             N/A
00:19:00.024  Firmware Activation Without Reset:     N/A
00:19:00.024  Multiple Update Detection Support:     N/A
00:19:00.024  Firmware Update Granularity:           No Information Provided
00:19:00.024  Per-Namespace SMART Log:               No
00:19:00.024  Asymmetric Namespace Access Log Page:  Not Supported
00:19:00.024  Subsystem NQN:                         nqn.2019-07.io.spdk:cnode2
00:19:00.024  Command Effects Log Page:              Supported
00:19:00.024  Get Log Page Extended Data:            Supported
00:19:00.024  Telemetry Log Pages:                   Not Supported
00:19:00.024  Persistent Event Log Pages:            Not Supported
00:19:00.024  Supported Log Pages Log Page:          May Support
00:19:00.024  Commands Supported & Effects Log Page: Not Supported
00:19:00.024  Feature Identifiers & Effects Log Page: May Support
00:19:00.024  NVMe-MI Commands & Effects Log Page:   May Support
00:19:00.024  Data Area 4 for Telemetry Log:         Not Supported
00:19:00.024  Error Log Page Entries Supported:      128
00:19:00.024  Keep Alive:                            Supported
00:19:00.024  Keep Alive Granularity:                10000 ms
00:19:00.024  
00:19:00.024  NVM Command Set Attributes
00:19:00.024  ==========================
00:19:00.024  Submission Queue Entry Size
00:19:00.024    Max:                       64
00:19:00.024    Min:                       64
00:19:00.024  Completion Queue Entry Size
00:19:00.024    Max:                       16
00:19:00.024    Min:                       16
00:19:00.024  Number of Namespaces:        32
00:19:00.024  Compare Command:             Supported
00:19:00.024  Write Uncorrectable Command: Not Supported
00:19:00.024  Dataset Management Command:  Supported
00:19:00.024  Write Zeroes Command:        Supported
00:19:00.024  Set Features Save Field:     Not Supported
00:19:00.024  Reservations:                Not Supported
00:19:00.024  Timestamp:                   Not Supported
00:19:00.024  Copy:                        Supported
00:19:00.024  Volatile Write Cache:        Present
00:19:00.024  Atomic Write Unit (Normal):  1
00:19:00.024  Atomic Write Unit (PFail):   1
00:19:00.024  Atomic Compare & Write Unit: 1
00:19:00.024  Fused Compare & Write:       Supported
00:19:00.024  Scatter-Gather List
00:19:00.024    SGL Command Set:           Supported (Dword aligned)
00:19:00.024    SGL Keyed:                 Not Supported
00:19:00.024    SGL Bit Bucket Descriptor: Not Supported
00:19:00.024    SGL Metadata Pointer:      Not Supported
00:19:00.024    Oversized SGL:             Not Supported
00:19:00.024    SGL Metadata Address:      Not Supported
00:19:00.024    SGL Offset:                Not Supported
00:19:00.024    Transport SGL Data Block:  Not Supported
00:19:00.024  Replay Protected Memory Block:  Not Supported
00:19:00.024  
00:19:00.024  Firmware Slot Information
00:19:00.024  =========================
00:19:00.024  Active slot:                 1
00:19:00.024  Slot 1 Firmware Revision:    25.01
00:19:00.024  
00:19:00.024  
00:19:00.024  Commands Supported and Effects
00:19:00.024  ==============================
00:19:00.025  Admin Commands
00:19:00.025  --------------
00:19:00.025                    Get Log Page (02h): Supported 
00:19:00.025                        Identify (06h): Supported 
00:19:00.025                           Abort (08h): Supported 
00:19:00.025                    Set Features (09h): Supported 
00:19:00.025                    Get Features (0Ah): Supported 
00:19:00.025      Asynchronous Event Request (0Ch): Supported 
00:19:00.025                      Keep Alive (18h): Supported 
00:19:00.025  I/O Commands
00:19:00.025  ------------
00:19:00.025                           Flush (00h): Supported LBA-Change 
00:19:00.025                           Write (01h): Supported LBA-Change 
00:19:00.025                            Read (02h): Supported 
00:19:00.025                         Compare (05h): Supported 
00:19:00.025                    Write Zeroes (08h): Supported LBA-Change 
00:19:00.025              Dataset Management (09h): Supported LBA-Change 
00:19:00.025                            Copy (19h): Supported LBA-Change 
00:19:00.025  
00:19:00.025  Error Log
00:19:00.025  =========
00:19:00.025  
00:19:00.025  Arbitration
00:19:00.025  ===========
00:19:00.025  Arbitration Burst:           1
00:19:00.025  
00:19:00.025  Power Management
00:19:00.025  ================
00:19:00.025  Number of Power States:          1
00:19:00.025  Current Power State:             Power State #0
00:19:00.025  Power State #0:
00:19:00.025    Max Power:                      0.00 W
00:19:00.025    Non-Operational State:         Operational
00:19:00.025    Entry Latency:                 Not Reported
00:19:00.025    Exit Latency:                  Not Reported
00:19:00.025    Relative Read Throughput:      0
00:19:00.025    Relative Read Latency:         0
00:19:00.025    Relative Write Throughput:     0
00:19:00.025    Relative Write Latency:        0
00:19:00.025    Idle Power:                     Not Reported
00:19:00.025    Active Power:                   Not Reported
00:19:00.025  Non-Operational Permissive Mode: Not Supported
00:19:00.025  
00:19:00.025  Health Information
00:19:00.025  ==================
00:19:00.025  Critical Warnings:
00:19:00.025    Available Spare Space:     OK
00:19:00.025    Temperature:               OK
00:19:00.025    Device Reliability:        OK
00:19:00.025    Read Only:                 No
00:19:00.025    Volatile Memory Backup:    OK
00:19:00.025  Current Temperature:         0 Kelvin (-273 Celsius)
00:19:00.025  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:19:00.025  Available Spare:             0%
00:19:00.025  [2024-12-10 00:00:15.688282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:19:00.025  [2024-12-10 00:00:15.696172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:19:00.025  [2024-12-10 00:00:15.696202] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:19:00.025  [2024-12-10 00:00:15.696211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:00.025  [2024-12-10 00:00:15.696217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:00.025  [2024-12-10 00:00:15.696222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:00.025  [2024-12-10 00:00:15.696228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:00.025  [2024-12-10 00:00:15.696267] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:19:00.025  [2024-12-10 00:00:15.696277] nvme_vfio_user.c:  49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:19:00.025  [2024-12-10 00:00:15.697267] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:00.025  [2024-12-10 00:00:15.697310] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:19:00.025  [2024-12-10 00:00:15.697316] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:19:00.025  [2024-12-10 00:00:15.698278] nvme_vfio_user.c:  83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:19:00.025  [2024-12-10 00:00:15.698289] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:19:00.025  [2024-12-10 00:00:15.698341] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:19:00.025  [2024-12-10 00:00:15.699293] vfio_user_pci.c:  96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:19:00.025  Available Spare Threshold:   0%
00:19:00.025  Life Percentage Used:        0%
00:19:00.025  Data Units Read:             0
00:19:00.025  Data Units Written:          0
00:19:00.025  Host Read Commands:          0
00:19:00.025  Host Write Commands:         0
00:19:00.025  Controller Busy Time:        0 minutes
00:19:00.025  Power Cycles:                0
00:19:00.025  Power On Hours:              0 hours
00:19:00.025  Unsafe Shutdowns:            0
00:19:00.025  Unrecoverable Media Errors:  0
00:19:00.025  Lifetime Error Log Entries:  0
00:19:00.025  Warning Temperature Time:    0 minutes
00:19:00.025  Critical Temperature Time:   0 minutes
00:19:00.025  
00:19:00.025  Number of Queues
00:19:00.025  ================
00:19:00.025  Number of I/O Submission Queues:      127
00:19:00.025  Number of I/O Completion Queues:      127
00:19:00.025  
00:19:00.025  Active Namespaces
00:19:00.025  =================
00:19:00.025  Namespace ID:1
00:19:00.025  Error Recovery Timeout:                Unlimited
00:19:00.025  Command Set Identifier:                NVM (00h)
00:19:00.025  Deallocate:                            Supported
00:19:00.025  Deallocated/Unwritten Error:           Not Supported
00:19:00.025  Deallocated Read Value:                Unknown
00:19:00.025  Deallocate in Write Zeroes:            Not Supported
00:19:00.025  Deallocated Guard Field:               0xFFFF
00:19:00.025  Flush:                                 Supported
00:19:00.025  Reservation:                           Supported
00:19:00.025  Namespace Sharing Capabilities:        Multiple Controllers
00:19:00.025  Size (in LBAs):                        131072 (0GiB)
00:19:00.025  Capacity (in LBAs):                    131072 (0GiB)
00:19:00.025  Utilization (in LBAs):                 131072 (0GiB)
00:19:00.025  NGUID:                                 05951107F6A4453CAADFDA297DCF898E
00:19:00.025  UUID:                                  05951107-f6a4-453c-aadf-da297dcf898e
00:19:00.025  Thin Provisioning:                     Not Supported
00:19:00.025  Per-NS Atomic Units:                   Yes
00:19:00.025    Atomic Boundary Size (Normal):       0
00:19:00.025    Atomic Boundary Size (PFail):        0
00:19:00.025    Atomic Boundary Offset:              0
00:19:00.025  Maximum Single Source Range Length:    65535
00:19:00.025  Maximum Copy Length:                   65535
00:19:00.025  Maximum Source Range Count:            1
00:19:00.025  NGUID/EUI64 Never Reused:              No
00:19:00.025  Namespace Write Protected:             No
00:19:00.025  Number of LBA Formats:                 1
00:19:00.025  Current LBA Format:                    LBA Format #00
00:19:00.025  LBA Format #00: Data Size:   512  Metadata Size:     0
00:19:00.025  
00:19:00.025   00:00:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:19:00.346  [2024-12-10 00:00:15.925347] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:05.635  Initializing NVMe Controllers
00:19:05.635  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:05.635  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:19:05.635  Initialization complete. Launching workers.
00:19:05.635  ========================================================
00:19:05.635                                                                                                           Latency(us)
00:19:05.635  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:19:05.635  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   39896.29     155.84    3208.17     979.55    6840.95
00:19:05.635  ========================================================
00:19:05.635  Total                                                                :   39896.29     155.84    3208.17     979.55    6840.95
00:19:05.635  
00:19:05.635  [2024-12-10 00:00:21.028427] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:05.635   00:00:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:19:05.635  [2024-12-10 00:00:21.266181] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:10.905  Initializing NVMe Controllers
00:19:10.906  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:10.906  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:19:10.906  Initialization complete. Launching workers.
00:19:10.906  ========================================================
00:19:10.906                                                                                                           Latency(us)
00:19:10.906  Device Information                                                   :       IOPS      MiB/s    Average        min        max
00:19:10.906  VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core  1:   39894.41     155.84    3208.31     985.90   10583.05
00:19:10.906  ========================================================
00:19:10.906  Total                                                                :   39894.41     155.84    3208.31     985.90   10583.05
00:19:10.906  
00:19:10.906  [2024-12-10 00:00:26.283763] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:10.906   00:00:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:19:10.906  [2024-12-10 00:00:26.485977] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:16.179  [2024-12-10 00:00:31.623269] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:16.179  Initializing NVMe Controllers
00:19:16.179  Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:16.179  Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:16.179  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:19:16.179  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:19:16.179  Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:19:16.179  Initialization complete. Launching workers.
00:19:16.179  Starting thread on core 2
00:19:16.179  Starting thread on core 3
00:19:16.179  Starting thread on core 1
00:19:16.179   00:00:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:19:16.179  [2024-12-10 00:00:31.917596] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:19.466  [2024-12-10 00:00:34.981387] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:19.466  Initializing NVMe Controllers
00:19:19.466  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:19.466  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:19.466  Associating SPDK bdev Controller (SPDK2               ) with lcore 0
00:19:19.466  Associating SPDK bdev Controller (SPDK2               ) with lcore 1
00:19:19.466  Associating SPDK bdev Controller (SPDK2               ) with lcore 2
00:19:19.466  Associating SPDK bdev Controller (SPDK2               ) with lcore 3
00:19:19.466  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:19:19.466  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:19:19.466  Initialization complete. Launching workers.
00:19:19.466  Starting thread on core 1 with urgent priority queue
00:19:19.466  Starting thread on core 2 with urgent priority queue
00:19:19.466  Starting thread on core 3 with urgent priority queue
00:19:19.466  Starting thread on core 0 with urgent priority queue
00:19:19.466  SPDK bdev Controller (SPDK2               ) core 0:  8350.33 IO/s    11.98 secs/100000 ios
00:19:19.466  SPDK bdev Controller (SPDK2               ) core 1:  8262.33 IO/s    12.10 secs/100000 ios
00:19:19.466  SPDK bdev Controller (SPDK2               ) core 2:  7813.33 IO/s    12.80 secs/100000 ios
00:19:19.466  SPDK bdev Controller (SPDK2               ) core 3: 10038.33 IO/s     9.96 secs/100000 ios
00:19:19.466  ========================================================
00:19:19.466  
00:19:19.466   00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:19:19.466  [2024-12-10 00:00:35.268717] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:19.466  Initializing NVMe Controllers
00:19:19.466  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:19.466  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:19.466    Namespace ID: 1 size: 0GB
00:19:19.466  Initialization complete.
00:19:19.466  INFO: using host memory buffer for IO
00:19:19.466  Hello world!
00:19:19.466  [2024-12-10 00:00:35.277773] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:19.466   00:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:19:19.725  [2024-12-10 00:00:35.553561] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:21.103  Initializing NVMe Controllers
00:19:21.103  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:21.103  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:21.103  Initialization complete. Launching workers.
00:19:21.103  submit (in ns)   avg, min, max =   6812.9,   3131.4, 4001073.3
00:19:21.103  complete (in ns) avg, min, max =  20617.6,   1720.0, 4996947.6
00:19:21.103  
00:19:21.103  Submit histogram
00:19:21.103  ================
00:19:21.103         Range in us     Cumulative     Count
00:19:21.103      3.124 -     3.139:    0.0061%  (        1)
00:19:21.103      3.154 -     3.170:    0.0122%  (        1)
00:19:21.103      3.170 -     3.185:    0.0183%  (        1)
00:19:21.103      3.185 -     3.200:    0.0978%  (       13)
00:19:21.103      3.200 -     3.215:    0.5194%  (       69)
00:19:21.103      3.215 -     3.230:    1.6072%  (      178)
00:19:21.103      3.230 -     3.246:    3.5077%  (      311)
00:19:21.103      3.246 -     3.261:    7.8037%  (      703)
00:19:21.103      3.261 -     3.276:   14.3486%  (     1071)
00:19:21.103      3.276 -     3.291:   21.3517%  (     1146)
00:19:21.103      3.291 -     3.307:   29.0332%  (     1257)
00:19:21.103      3.307 -     3.322:   35.9020%  (     1124)
00:19:21.103      3.322 -     3.337:   41.3896%  (      898)
00:19:21.103      3.337 -     3.352:   46.0706%  (      766)
00:19:21.103      3.352 -     3.368:   50.3728%  (      704)
00:19:21.104      3.368 -     3.383:   54.8704%  (      736)
00:19:21.104      3.383 -     3.398:   58.6104%  (      612)
00:19:21.104      3.398 -     3.413:   64.8558%  (     1022)
00:19:21.104      3.413 -     3.429:   71.3823%  (     1068)
00:19:21.104      3.429 -     3.444:   76.5522%  (      846)
00:19:21.104      3.444 -     3.459:   81.5204%  (      813)
00:19:21.104      3.459 -     3.474:   84.6248%  (      508)
00:19:21.104      3.474 -     3.490:   86.6842%  (      337)
00:19:21.104      3.490 -     3.505:   87.6131%  (      152)
00:19:21.104      3.505 -     3.520:   88.1386%  (       86)
00:19:21.104      3.520 -     3.535:   88.4991%  (       59)
00:19:21.104      3.535 -     3.550:   89.0125%  (       84)
00:19:21.104      3.550 -     3.566:   89.9352%  (      151)
00:19:21.104      3.566 -     3.581:   90.7663%  (      136)
00:19:21.104      3.581 -     3.596:   91.8296%  (      174)
00:19:21.104      3.596 -     3.611:   92.7341%  (      148)
00:19:21.104      3.611 -     3.627:   93.4429%  (      116)
00:19:21.104      3.627 -     3.642:   94.3840%  (      154)
00:19:21.104      3.642 -     3.657:   95.1173%  (      120)
00:19:21.104      3.657 -     3.672:   96.0218%  (      148)
00:19:21.104      3.672 -     3.688:   96.7795%  (      124)
00:19:21.104      3.688 -     3.703:   97.5678%  (      129)
00:19:21.104      3.703 -     3.718:   98.0812%  (       84)
00:19:21.104      3.718 -     3.733:   98.5395%  (       75)
00:19:21.104      3.733 -     3.749:   98.9367%  (       65)
00:19:21.104      3.749 -     3.764:   99.1567%  (       36)
00:19:21.104      3.764 -     3.779:   99.3339%  (       29)
00:19:21.104      3.779 -     3.794:   99.4745%  (       23)
00:19:21.104      3.794 -     3.810:   99.5478%  (       12)
00:19:21.104      3.810 -     3.825:   99.6150%  (       11)
00:19:21.104      3.825 -     3.840:   99.6333%  (        3)
00:19:21.104      3.840 -     3.855:   99.6578%  (        4)
00:19:21.104      5.090 -     5.120:   99.6639%  (        1)
00:19:21.104      5.211 -     5.242:   99.6700%  (        1)
00:19:21.104      5.394 -     5.425:   99.6761%  (        1)
00:19:21.104      5.455 -     5.486:   99.6822%  (        1)
00:19:21.104      5.486 -     5.516:   99.6883%  (        1)
00:19:21.104      5.577 -     5.608:   99.6945%  (        1)
00:19:21.104      5.608 -     5.638:   99.7006%  (        1)
00:19:21.104      5.699 -     5.730:   99.7067%  (        1)
00:19:21.104      5.730 -     5.760:   99.7128%  (        1)
00:19:21.104      5.790 -     5.821:   99.7189%  (        1)
00:19:21.104      5.851 -     5.882:   99.7250%  (        1)
00:19:21.104      6.034 -     6.065:   99.7311%  (        1)
00:19:21.104      6.095 -     6.126:   99.7372%  (        1)
00:19:21.104      6.400 -     6.430:   99.7433%  (        1)
00:19:21.104      6.430 -     6.461:   99.7495%  (        1)
00:19:21.104      6.461 -     6.491:   99.7556%  (        1)
00:19:21.104      6.522 -     6.552:   99.7678%  (        2)
00:19:21.104      6.583 -     6.613:   99.7739%  (        1)
00:19:21.104      6.613 -     6.644:   99.7800%  (        1)
00:19:21.104      6.705 -     6.735:   99.7922%  (        2)
00:19:21.104      6.766 -     6.796:   99.7983%  (        1)
00:19:21.104      6.796 -     6.827:   99.8106%  (        2)
00:19:21.104      6.979 -     7.010:   99.8167%  (        1)
00:19:21.104      7.192 -     7.223:   99.8228%  (        1)
00:19:21.104      7.253 -     7.284:   99.8350%  (        2)
00:19:21.104      7.284 -     7.314:   99.8411%  (        1)
00:19:21.104      7.375 -     7.406:   99.8472%  (        1)
00:19:21.104      7.406 -     7.436:   99.8533%  (        1)
00:19:21.104      7.558 -     7.589:   99.8594%  (        1)
00:19:21.104      7.741 -     7.771:   99.8717%  (        2)
00:19:21.104      8.046 -     8.107:   99.8778%  (        1)
00:19:21.104      8.350 -     8.411:   99.8839%  (        1)
00:19:21.104   [2024-12-10 00:00:36.655164] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:21.104     8.594 -     8.655:   99.8961%  (        2)
00:19:21.104      8.838 -     8.899:   99.9022%  (        1)
00:19:21.104      9.570 -     9.630:   99.9083%  (        1)
00:19:21.104     11.215 -    11.276:   99.9144%  (        1)
00:19:21.104   3994.575 -  4025.783:  100.0000%  (       14)
00:19:21.104  
00:19:21.104  Complete histogram
00:19:21.104  ==================
00:19:21.104         Range in us     Cumulative     Count
00:19:21.104      1.714 -     1.722:    0.0061%  (        1)
00:19:21.104      1.722 -     1.730:    0.0122%  (        1)
00:19:21.104      1.737 -     1.745:    0.0183%  (        1)
00:19:21.104      1.745 -     1.752:    0.0244%  (        1)
00:19:21.104      1.752 -     1.760:    0.0367%  (        2)
00:19:21.104      1.760 -     1.768:    0.1711%  (       22)
00:19:21.104      1.768 -     1.775:    0.8922%  (      118)
00:19:21.104      1.775 -     1.783:    2.9760%  (      341)
00:19:21.104      1.783 -     1.790:    5.3593%  (      390)
00:19:21.104      1.790 -     1.798:    6.8382%  (      242)
00:19:21.104      1.798 -     1.806:    8.3781%  (      252)
00:19:21.104      1.806 -     1.813:   14.3975%  (      985)
00:19:21.104      1.813 -     1.821:   37.9858%  (     3860)
00:19:21.104      1.821 -     1.829:   68.8707%  (     5054)
00:19:21.104      1.829 -     1.836:   85.6698%  (     2749)
00:19:21.104      1.836 -     1.844:   91.4263%  (      942)
00:19:21.104      1.844 -     1.851:   94.1640%  (      448)
00:19:21.104      1.851 -     1.859:   95.6612%  (      245)
00:19:21.104      1.859 -     1.867:   96.3212%  (      108)
00:19:21.104      1.867 -     1.874:   96.6390%  (       52)
00:19:21.104      1.874 -     1.882:   97.0728%  (       71)
00:19:21.104      1.882 -     1.890:   97.5617%  (       80)
00:19:21.104      1.890 -     1.897:   98.0750%  (       84)
00:19:21.104      1.897 -     1.905:   98.5395%  (       76)
00:19:21.104      1.905 -     1.912:   98.9184%  (       62)
00:19:21.104      1.912 -     1.920:   99.1689%  (       41)
00:19:21.104      1.920 -     1.928:   99.2606%  (       15)
00:19:21.104      1.928 -     1.935:   99.2911%  (        5)
00:19:21.104      1.935 -     1.943:   99.2972%  (        1)
00:19:21.104      1.966 -     1.981:   99.3033%  (        1)
00:19:21.104      2.011 -     2.027:   99.3095%  (        1)
00:19:21.104      3.291 -     3.307:   99.3156%  (        1)
00:19:21.104      3.718 -     3.733:   99.3217%  (        1)
00:19:21.104      3.733 -     3.749:   99.3278%  (        1)
00:19:21.104      3.794 -     3.810:   99.3339%  (        1)
00:19:21.104      3.810 -     3.825:   99.3400%  (        1)
00:19:21.104      3.855 -     3.870:   99.3461%  (        1)
00:19:21.104      3.901 -     3.931:   99.3522%  (        1)
00:19:21.104      3.962 -     3.992:   99.3645%  (        2)
00:19:21.104      4.023 -     4.053:   99.3706%  (        1)
00:19:21.104      4.145 -     4.175:   99.3767%  (        1)
00:19:21.104      4.602 -     4.632:   99.3828%  (        1)
00:19:21.104      4.785 -     4.815:   99.3889%  (        1)
00:19:21.104      4.876 -     4.907:   99.4072%  (        3)
00:19:21.104      5.120 -     5.150:   99.4133%  (        1)
00:19:21.104      5.303 -     5.333:   99.4256%  (        2)
00:19:21.104      5.486 -     5.516:   99.4317%  (        1)
00:19:21.104      5.638 -     5.669:   99.4378%  (        1)
00:19:21.104      5.730 -     5.760:   99.4439%  (        1)
00:19:21.104      5.790 -     5.821:   99.4500%  (        1)
00:19:21.104      6.034 -     6.065:   99.4561%  (        1)
00:19:21.104      6.126 -     6.156:   99.4622%  (        1)
00:19:21.104      6.248 -     6.278:   99.4683%  (        1)
00:19:21.104      6.309 -     6.339:   99.4745%  (        1)
00:19:21.104      6.644 -     6.674:   99.4806%  (        1)
00:19:21.104      6.705 -     6.735:   99.4867%  (        1)
00:19:21.104      6.796 -     6.827:   99.4928%  (        1)
00:19:21.104      6.857 -     6.888:   99.4989%  (        1)
00:19:21.104      7.345 -     7.375:   99.5050%  (        1)
00:19:21.104      7.650 -     7.680:   99.5111%  (        1)
00:19:21.104      7.985 -     8.046:   99.5172%  (        1)
00:19:21.104      8.350 -     8.411:   99.5233%  (        1)
00:19:21.104      8.716 -     8.777:   99.5295%  (        1)
00:19:21.104   2824.290 -  2839.893:   99.5356%  (        1)
00:19:21.104   3994.575 -  4025.783:   99.9939%  (       75)
00:19:21.104   4993.219 -  5024.427:  100.0000%  (        1)
00:19:21.104  
00:19:21.104   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2
00:19:21.104   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2
00:19:21.104   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2
00:19:21.104   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4
00:19:21.104   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:19:21.104  [
00:19:21.104    {
00:19:21.104      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:19:21.104      "subtype": "Discovery",
00:19:21.104      "listen_addresses": [],
00:19:21.104      "allow_any_host": true,
00:19:21.104      "hosts": []
00:19:21.104    },
00:19:21.104    {
00:19:21.104      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:19:21.104      "subtype": "NVMe",
00:19:21.104      "listen_addresses": [
00:19:21.104        {
00:19:21.104          "trtype": "VFIOUSER",
00:19:21.105          "adrfam": "IPv4",
00:19:21.105          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:19:21.105          "trsvcid": "0"
00:19:21.105        }
00:19:21.105      ],
00:19:21.105      "allow_any_host": true,
00:19:21.105      "hosts": [],
00:19:21.105      "serial_number": "SPDK1",
00:19:21.105      "model_number": "SPDK bdev Controller",
00:19:21.105      "max_namespaces": 32,
00:19:21.105      "min_cntlid": 1,
00:19:21.105      "max_cntlid": 65519,
00:19:21.105      "namespaces": [
00:19:21.105        {
00:19:21.105          "nsid": 1,
00:19:21.105          "bdev_name": "Malloc1",
00:19:21.105          "name": "Malloc1",
00:19:21.105          "nguid": "BB6029EA4E674629A5C816EFB5D2CD88",
00:19:21.105          "uuid": "bb6029ea-4e67-4629-a5c8-16efb5d2cd88"
00:19:21.105        },
00:19:21.105        {
00:19:21.105          "nsid": 2,
00:19:21.105          "bdev_name": "Malloc3",
00:19:21.105          "name": "Malloc3",
00:19:21.105          "nguid": "DB75ECA14DD94882A22393FC97EEF242",
00:19:21.105          "uuid": "db75eca1-4dd9-4882-a223-93fc97eef242"
00:19:21.105        }
00:19:21.105      ]
00:19:21.105    },
00:19:21.105    {
00:19:21.105      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:19:21.105      "subtype": "NVMe",
00:19:21.105      "listen_addresses": [
00:19:21.105        {
00:19:21.105          "trtype": "VFIOUSER",
00:19:21.105          "adrfam": "IPv4",
00:19:21.105          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:19:21.105          "trsvcid": "0"
00:19:21.105        }
00:19:21.105      ],
00:19:21.105      "allow_any_host": true,
00:19:21.105      "hosts": [],
00:19:21.105      "serial_number": "SPDK2",
00:19:21.105      "model_number": "SPDK bdev Controller",
00:19:21.105      "max_namespaces": 32,
00:19:21.105      "min_cntlid": 1,
00:19:21.105      "max_cntlid": 65519,
00:19:21.105      "namespaces": [
00:19:21.105        {
00:19:21.105          "nsid": 1,
00:19:21.105          "bdev_name": "Malloc2",
00:19:21.105          "name": "Malloc2",
00:19:21.105          "nguid": "05951107F6A4453CAADFDA297DCF898E",
00:19:21.105          "uuid": "05951107-f6a4-453c-aadf-da297dcf898e"
00:19:21.105        }
00:19:21.105      ]
00:19:21.105    }
00:19:21.105  ]
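The `nvmf_get_subsystems` JSON above is usually post-processed by the test scripts. A minimal, self-contained sketch of pulling the subsystem NQNs out of such output in plain shell (a trimmed sample is inlined here; a real run would pipe `scripts/rpc.py nvmf_get_subsystems` instead):

```shell
# Extract every "nqn" value from nvmf_get_subsystems-style JSON.
# This sed-based approach is an illustration only; jq would be the
# robust choice when available.
list_nqns() {
    sed -n 's/^ *"nqn": *"\(.*\)",*$/\1/p'
}

# Inlined sample mirroring the subsystems listed above.
sample='[
  { "nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery" },
    "nqn": "nqn.2019-07.io.spdk:cnode1",
    "nqn": "nqn.2019-07.io.spdk:cnode2"
]'

printf '%s\n' "$sample" | list_nqns
```

Note the sed pattern only matches `"nqn"` keys on their own line, which is how `rpc.py` pretty-prints its output.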
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '		trtype:VFIOUSER 		traddr:/var/run/vfio-user/domain/vfio-user2/2 		subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3061477
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
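The `waitforfile` calls traced above (autotest_common.sh lines 1269-1280) poll for `/tmp/aer_touch_file` before the test removes it. A runnable sketch of that polling pattern — the retry bound and sleep interval here are illustrative assumptions, not the exact values from autotest_common.sh:

```shell
# Poll until a path exists, giving up after a bounded number of retries
# (retry count and 0.1s interval are assumed values for this sketch).
waitforfile() {
    local path=$1 retries=${2:-200} i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$retries" ]; then
            return 1    # timed out waiting for the file
        fi
        sleep 0.1
    done
    return 0
}

tmp=$(mktemp)
waitforfile "$tmp" && echo "file present"
rm -f "$tmp"
```

The AER test uses this handshake so the harness only proceeds (and deletes the touch file) once the `aer` binary has signalled it is ready.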
00:19:21.105   00:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
00:19:21.364  [2024-12-10 00:00:37.048596] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:21.364  Malloc4
00:19:21.364   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
00:19:21.622  [2024-12-10 00:00:37.307406] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:21.622   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:19:21.622  Asynchronous Event Request test
00:19:21.622  Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:21.622  Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:21.622  Registering asynchronous event callbacks...
00:19:21.622  Starting namespace attribute notice tests for all controllers...
00:19:21.622  /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:19:21.622  aer_cb - Changed Namespace
00:19:21.622  Cleaning up...
00:19:21.882  [
00:19:21.882    {
00:19:21.882      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:19:21.882      "subtype": "Discovery",
00:19:21.882      "listen_addresses": [],
00:19:21.882      "allow_any_host": true,
00:19:21.882      "hosts": []
00:19:21.882    },
00:19:21.882    {
00:19:21.882      "nqn": "nqn.2019-07.io.spdk:cnode1",
00:19:21.882      "subtype": "NVMe",
00:19:21.882      "listen_addresses": [
00:19:21.882        {
00:19:21.882          "trtype": "VFIOUSER",
00:19:21.882          "adrfam": "IPv4",
00:19:21.882          "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:19:21.882          "trsvcid": "0"
00:19:21.882        }
00:19:21.882      ],
00:19:21.882      "allow_any_host": true,
00:19:21.882      "hosts": [],
00:19:21.882      "serial_number": "SPDK1",
00:19:21.882      "model_number": "SPDK bdev Controller",
00:19:21.882      "max_namespaces": 32,
00:19:21.882      "min_cntlid": 1,
00:19:21.882      "max_cntlid": 65519,
00:19:21.882      "namespaces": [
00:19:21.882        {
00:19:21.882          "nsid": 1,
00:19:21.882          "bdev_name": "Malloc1",
00:19:21.882          "name": "Malloc1",
00:19:21.882          "nguid": "BB6029EA4E674629A5C816EFB5D2CD88",
00:19:21.882          "uuid": "bb6029ea-4e67-4629-a5c8-16efb5d2cd88"
00:19:21.882        },
00:19:21.882        {
00:19:21.882          "nsid": 2,
00:19:21.882          "bdev_name": "Malloc3",
00:19:21.882          "name": "Malloc3",
00:19:21.882          "nguid": "DB75ECA14DD94882A22393FC97EEF242",
00:19:21.882          "uuid": "db75eca1-4dd9-4882-a223-93fc97eef242"
00:19:21.882        }
00:19:21.882      ]
00:19:21.882    },
00:19:21.882    {
00:19:21.882      "nqn": "nqn.2019-07.io.spdk:cnode2",
00:19:21.882      "subtype": "NVMe",
00:19:21.882      "listen_addresses": [
00:19:21.882        {
00:19:21.882          "trtype": "VFIOUSER",
00:19:21.882          "adrfam": "IPv4",
00:19:21.882          "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:19:21.882          "trsvcid": "0"
00:19:21.882        }
00:19:21.882      ],
00:19:21.882      "allow_any_host": true,
00:19:21.882      "hosts": [],
00:19:21.882      "serial_number": "SPDK2",
00:19:21.882      "model_number": "SPDK bdev Controller",
00:19:21.882      "max_namespaces": 32,
00:19:21.882      "min_cntlid": 1,
00:19:21.882      "max_cntlid": 65519,
00:19:21.882      "namespaces": [
00:19:21.882        {
00:19:21.882          "nsid": 1,
00:19:21.882          "bdev_name": "Malloc2",
00:19:21.882          "name": "Malloc2",
00:19:21.882          "nguid": "05951107F6A4453CAADFDA297DCF898E",
00:19:21.882          "uuid": "05951107-f6a4-453c-aadf-da297dcf898e"
00:19:21.882        },
00:19:21.882        {
00:19:21.882          "nsid": 2,
00:19:21.882          "bdev_name": "Malloc4",
00:19:21.882          "name": "Malloc4",
00:19:21.882          "nguid": "73976876F3A5428EB73CD6E577A811C3",
00:19:21.882          "uuid": "73976876-f3a5-428e-b73c-d6e577a811c3"
00:19:21.882        }
00:19:21.882      ]
00:19:21.882    }
00:19:21.882  ]
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3061477
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3053116
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3053116 ']'
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3053116
00:19:21.882    00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:21.882    00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3053116
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3053116'
00:19:21.882  killing process with pid 3053116
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3053116
00:19:21.882   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3053116
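The `killprocess` trace above checks the pid is set, confirms the process is alive with `kill -0`, inspects its comm name, then kills and reaps it. A condensed, runnable sketch of that teardown pattern (the sudo/comm-name guard from autotest_common.sh is omitted for brevity):

```shell
# Kill and reap a process by pid, tolerating an already-exited target.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1            # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0   # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null              # reap so the pid cannot be reused
    return 0
}

sleep 60 &
killprocess $!
```

Reaping with `wait` matters in the harness: the subsequent `wait $nvmfpid` would otherwise race against pid reuse between tests.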
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3061495
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3061495'
00:19:22.141  Process pid: 3061495
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3061495
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3061495 ']'
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:22.141  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:22.141   00:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:19:22.141  [2024-12-10 00:00:37.864245] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:19:22.141  [2024-12-10 00:00:37.865126] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:19:22.141  [2024-12-10 00:00:37.865173] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:22.141  [2024-12-10 00:00:37.937019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:22.141  [2024-12-10 00:00:37.977389] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:22.141  [2024-12-10 00:00:37.977426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:22.141  [2024-12-10 00:00:37.977433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:22.141  [2024-12-10 00:00:37.977440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:22.141  [2024-12-10 00:00:37.977444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:22.141  [2024-12-10 00:00:37.978892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:22.141  [2024-12-10 00:00:37.979215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:22.141  [2024-12-10 00:00:37.979240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:22.141  [2024-12-10 00:00:37.979240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:19:22.400  [2024-12-10 00:00:38.047135] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:19:22.400  [2024-12-10 00:00:38.048410] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:19:22.400  [2024-12-10 00:00:38.048494] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:19:22.400  [2024-12-10 00:00:38.048864] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:19:22.400  [2024-12-10 00:00:38.048901] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:19:22.400   00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:22.400   00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0
00:19:22.400   00:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:19:23.338   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:19:23.597   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:19:23.597    00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:19:23.597   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:19:23.597   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:19:23.597   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:19:23.856  Malloc1
00:19:23.856   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:19:24.114   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:19:24.114   00:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:19:24.373   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:19:24.373   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:19:24.373   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:19:24.631  Malloc2
00:19:24.631   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:19:24.888   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
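The per-device setup sequence traced above (mkdir, `bdev_malloc_create`, `nvmf_create_subsystem`, `nvmf_subsystem_add_ns`, `nvmf_subsystem_add_listener`) can be condensed into one loop. In this sketch `rpc` is a stub so the snippet runs without a live `nvmf_tgt`; the real script invokes `scripts/rpc.py`, and `/tmp/vfio-user-demo` stands in for `/var/run/vfio-user`:

```shell
# Stub standing in for scripts/rpc.py so the sketch is self-contained.
rpc() { echo "rpc.py $*"; }

# One iteration of the setup_nvmf_vfio_user device loop.
setup_vfio_user_device() {
    local i=$1
    local root=/tmp/vfio-user-demo       # real path: /var/run/vfio-user
    mkdir -p "$root/domain/vfio-user$i/$i"
    rpc bdev_malloc_create 64 512 -b "Malloc$i"
    rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$root/domain/vfio-user$i/$i" -s 0
}

for i in 1 2; do
    setup_vfio_user_device "$i"
done
```

With the stub replaced by the real `rpc.py`, this reproduces the two VFIOUSER devices (cnode1/cnode2) that the interrupt-mode run above creates before tearing the target down.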
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3061495
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3061495 ']'
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3061495
00:19:25.146    00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname
00:19:25.146   00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:25.146    00:00:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3061495
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3061495'
00:19:25.406  killing process with pid 3061495
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3061495
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3061495
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:19:25.406  
00:19:25.406  real	0m51.759s
00:19:25.406  user	3m20.199s
00:19:25.406  sys	0m3.261s
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:19:25.406  ************************************
00:19:25.406  END TEST nvmf_vfio_user
00:19:25.406  ************************************
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:25.406   00:00:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:25.666  ************************************
00:19:25.666  START TEST nvmf_vfio_user_nvme_compliance
00:19:25.666  ************************************
00:19:25.666   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:19:25.666  * Looking for test storage...
00:19:25.666  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-:
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-:
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<'
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:25.666     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0
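The xtrace above walks scripts/common.sh comparing the installed lcov version against 1.15 component by component. A condensed, runnable bash sketch of that dotted-version comparison — simplified to split on `.` only, where the original also splits on `-` and `:`:

```shell
# lt returns success (0) when version $1 is strictly lower than $2,
# comparing numeric components left to right; missing components are 0.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i n=${#v1[@]}
    [ ${#v2[@]} -gt "$n" ] && n=${#v2[@]}
    for ((i = 0; i < n; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1    # equal versions are not less-than
}

lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-component comparison is what makes `1.2.3 < 1.10` come out true, where a plain string compare would get it wrong.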
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:25.666  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:25.666  		--rc genhtml_branch_coverage=1
00:19:25.666  		--rc genhtml_function_coverage=1
00:19:25.666  		--rc genhtml_legend=1
00:19:25.666  		--rc geninfo_all_blocks=1
00:19:25.666  		--rc geninfo_unexecuted_blocks=1
00:19:25.666  		
00:19:25.666  		'
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:25.666  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:25.666  		--rc genhtml_branch_coverage=1
00:19:25.666  		--rc genhtml_function_coverage=1
00:19:25.666  		--rc genhtml_legend=1
00:19:25.666  		--rc geninfo_all_blocks=1
00:19:25.666  		--rc geninfo_unexecuted_blocks=1
00:19:25.666  		
00:19:25.666  		'
00:19:25.666    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:25.666  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:25.666  		--rc genhtml_branch_coverage=1
00:19:25.667  		--rc genhtml_function_coverage=1
00:19:25.667  		--rc genhtml_legend=1
00:19:25.667  		--rc geninfo_all_blocks=1
00:19:25.667  		--rc geninfo_unexecuted_blocks=1
00:19:25.667  		
00:19:25.667  		'
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:25.667  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:25.667  		--rc genhtml_branch_coverage=1
00:19:25.667  		--rc genhtml_function_coverage=1
00:19:25.667  		--rc genhtml_legend=1
00:19:25.667  		--rc geninfo_all_blocks=1
00:19:25.667  		--rc geninfo_unexecuted_blocks=1
00:19:25.667  		
00:19:25.667  		'
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:25.667     00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:25.667      00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:25.667      00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:25.667      00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:25.667      00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH
00:19:25.667      00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:25.667  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:25.667    00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3062245
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3062245'
00:19:25.667  Process pid: 3062245
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3062245
00:19:25.667   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3062245 ']'
00:19:25.668   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:25.668   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:25.668   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:25.668  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:25.668   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:25.668   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:25.927  [2024-12-10 00:00:41.536610] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:19:25.927  [2024-12-10 00:00:41.536656] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:25.927  [2024-12-10 00:00:41.609906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:25.927  [2024-12-10 00:00:41.649630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:25.927  [2024-12-10 00:00:41.649664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:25.927  [2024-12-10 00:00:41.649670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:25.927  [2024-12-10 00:00:41.649676] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:25.927  [2024-12-10 00:00:41.649681] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:25.927  [2024-12-10 00:00:41.650929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:25.927  [2024-12-10 00:00:41.651039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:25.927  [2024-12-10 00:00:41.651040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:25.927   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:25.927   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0
00:19:25.927   00:00:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:27.304  malloc0
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:27.304   00:00:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
00:19:27.304  
00:19:27.304  
00:19:27.304       CUnit - A unit testing framework for C - Version 2.1-3
00:19:27.304       http://cunit.sourceforge.net/
00:19:27.304  
00:19:27.304  
00:19:27.304  Suite: nvme_compliance
00:19:27.304    Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 00:00:42.989564] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.304  [2024-12-10 00:00:42.990891] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining
00:19:27.304  [2024-12-10 00:00:42.990906] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed
00:19:27.304  [2024-12-10 00:00:42.990912] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed
00:19:27.304  [2024-12-10 00:00:42.994587] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.304  passed
00:19:27.304    Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 00:00:43.067109] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.304  [2024-12-10 00:00:43.072139] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.304  passed
00:19:27.304    Test: admin_identify_ns ...[2024-12-10 00:00:43.151429] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.563  [2024-12-10 00:00:43.212176] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:19:27.563  [2024-12-10 00:00:43.220181] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:19:27.563  [2024-12-10 00:00:43.241260] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.563  passed
00:19:27.563    Test: admin_get_features_mandatory_features ...[2024-12-10 00:00:43.315149] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.563  [2024-12-10 00:00:43.318174] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.563  passed
00:19:27.563    Test: admin_get_features_optional_features ...[2024-12-10 00:00:43.395677] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.563  [2024-12-10 00:00:43.398699] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.821  passed
00:19:27.821    Test: admin_set_features_number_of_queues ...[2024-12-10 00:00:43.477467] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.821  [2024-12-10 00:00:43.583263] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:27.821  passed
00:19:27.821    Test: admin_get_log_page_mandatory_logs ...[2024-12-10 00:00:43.656974] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:27.821  [2024-12-10 00:00:43.659993] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.086  passed
00:19:28.086    Test: admin_get_log_page_with_lpo ...[2024-12-10 00:00:43.736658] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.086  [2024-12-10 00:00:43.808179] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512)
00:19:28.086  [2024-12-10 00:00:43.821235] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.086  passed
00:19:28.086    Test: fabric_property_get ...[2024-12-10 00:00:43.893967] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.086  [2024-12-10 00:00:43.895199] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed
00:19:28.086  [2024-12-10 00:00:43.898998] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.086  passed
00:19:28.346    Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 00:00:43.977554] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.346  [2024-12-10 00:00:43.978788] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist
00:19:28.346  [2024-12-10 00:00:43.980577] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.346  passed
00:19:28.346    Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 00:00:44.053270] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.346  [2024-12-10 00:00:44.140176] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:28.346  [2024-12-10 00:00:44.156186] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:28.346  [2024-12-10 00:00:44.161255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.346  passed
00:19:28.604    Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 00:00:44.237922] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.604  [2024-12-10 00:00:44.239165] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist
00:19:28.604  [2024-12-10 00:00:44.240953] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.604  passed
00:19:28.604    Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 00:00:44.315618] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.604  [2024-12-10 00:00:44.394183] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:19:28.604  [2024-12-10 00:00:44.419182] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist
00:19:28.604  [2024-12-10 00:00:44.424254] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.604  passed
00:19:28.863    Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 00:00:44.496963] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.863  [2024-12-10 00:00:44.498203] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big
00:19:28.863  [2024-12-10 00:00:44.498226] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported
00:19:28.863  [2024-12-10 00:00:44.501995] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:28.863  passed
00:19:28.863    Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 00:00:44.576699] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:28.863  [2024-12-10 00:00:44.668198] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1
00:19:28.863  [2024-12-10 00:00:44.676173] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:19:28.863  [2024-12-10 00:00:44.684185] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:19:28.863  [2024-12-10 00:00:44.692176] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:19:29.122  [2024-12-10 00:00:44.721270] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:29.122  passed
00:19:29.122    Test: admin_create_io_sq_verify_pc ...[2024-12-10 00:00:44.800729] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:29.122  [2024-12-10 00:00:44.821182] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:19:29.122  [2024-12-10 00:00:44.838911] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:29.122  passed
00:19:29.122    Test: admin_create_io_qp_max_qps ...[2024-12-10 00:00:44.915452] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:30.496  [2024-12-10 00:00:46.021175] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs
00:19:30.754  [2024-12-10 00:00:46.399461] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:30.754  passed
00:19:30.754    Test: admin_create_io_sq_shared_cq ...[2024-12-10 00:00:46.477421] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:19:31.017  [2024-12-10 00:00:46.614170] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:19:31.017  [2024-12-10 00:00:46.651231] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:19:31.017  passed
00:19:31.017  
00:19:31.017  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:19:31.017                suites      1      1    n/a      0        0
00:19:31.017                 tests     18     18     18      0        0
00:19:31.017               asserts    360    360    360      0      n/a
00:19:31.017  
00:19:31.017  Elapsed time =    1.507 seconds
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3062245
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3062245 ']'
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3062245
00:19:31.017    00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:31.017    00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3062245
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3062245'
00:19:31.017  killing process with pid 3062245
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3062245
00:19:31.017   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3062245
00:19:31.280   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user
00:19:31.280   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:19:31.280  
00:19:31.280  real	0m5.647s
00:19:31.280  user	0m15.807s
00:19:31.280  sys	0m0.497s
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x
00:19:31.281  ************************************
00:19:31.281  END TEST nvmf_vfio_user_nvme_compliance
00:19:31.281  ************************************
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:31.281  ************************************
00:19:31.281  START TEST nvmf_vfio_user_fuzz
00:19:31.281  ************************************
00:19:31.281   00:00:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp
00:19:31.281  * Looking for test storage...
00:19:31.281  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:31.281    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:31.281     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version
00:19:31.281     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-:
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-:
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<'
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in
00:19:31.540    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:31.541  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.541  		--rc genhtml_branch_coverage=1
00:19:31.541  		--rc genhtml_function_coverage=1
00:19:31.541  		--rc genhtml_legend=1
00:19:31.541  		--rc geninfo_all_blocks=1
00:19:31.541  		--rc geninfo_unexecuted_blocks=1
00:19:31.541  		
00:19:31.541  		'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:31.541  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.541  		--rc genhtml_branch_coverage=1
00:19:31.541  		--rc genhtml_function_coverage=1
00:19:31.541  		--rc genhtml_legend=1
00:19:31.541  		--rc geninfo_all_blocks=1
00:19:31.541  		--rc geninfo_unexecuted_blocks=1
00:19:31.541  		
00:19:31.541  		'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:19:31.541  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.541  		--rc genhtml_branch_coverage=1
00:19:31.541  		--rc genhtml_function_coverage=1
00:19:31.541  		--rc genhtml_legend=1
00:19:31.541  		--rc geninfo_all_blocks=1
00:19:31.541  		--rc geninfo_unexecuted_blocks=1
00:19:31.541  		
00:19:31.541  		'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:19:31.541  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:31.541  		--rc genhtml_branch_coverage=1
00:19:31.541  		--rc genhtml_function_coverage=1
00:19:31.541  		--rc genhtml_legend=1
00:19:31.541  		--rc geninfo_all_blocks=1
00:19:31.541  		--rc geninfo_unexecuted_blocks=1
00:19:31.541  		
00:19:31.541  		'
00:19:31.541   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:31.541     00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:31.541      00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:31.541      00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:31.541      00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:31.541      00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH
00:19:31.541      00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:19:31.541  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:19:31.541    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:19:31.542    00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3063207
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3063207'
00:19:31.542  Process pid: 3063207
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3063207
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3063207 ']'
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:31.542  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:31.542   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:31.801   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:31.801   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0
00:19:31.801   00:00:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:32.736  malloc0
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
00:19:32.736   00:00:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
00:20:04.814  Fuzzing completed. Shutting down the fuzz application
00:20:04.814  
00:20:04.814  Dumping successful admin opcodes:
00:20:04.814  9, 10, 
00:20:04.814  Dumping successful io opcodes:
00:20:04.814  0, 
00:20:04.814  NS: 0x20000081ef00 I/O qp, Total commands completed: 1005946, total successful commands: 3943, random_seed: 2338118848
00:20:04.814  NS: 0x20000081ef00 admin qp, Total commands completed: 243216, total successful commands: 57, random_seed: 3004952576
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3063207
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3063207 ']'
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3063207
00:20:04.814    00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:04.814    00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3063207
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3063207'
00:20:04.814  killing process with pid 3063207
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3063207
00:20:04.814   00:01:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3063207
00:20:04.814   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt
00:20:04.814   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT
00:20:04.814  
00:20:04.814  real	0m32.211s
00:20:04.814  user	0m29.469s
00:20:04.814  sys	0m31.494s
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x
00:20:04.815  ************************************
00:20:04.815  END TEST nvmf_vfio_user_fuzz
00:20:04.815  ************************************
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:04.815  ************************************
00:20:04.815  START TEST nvmf_auth_target
00:20:04.815  ************************************
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp
00:20:04.815  * Looking for test storage...
00:20:04.815  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-:
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-:
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:04.815  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.815  		--rc genhtml_branch_coverage=1
00:20:04.815  		--rc genhtml_function_coverage=1
00:20:04.815  		--rc genhtml_legend=1
00:20:04.815  		--rc geninfo_all_blocks=1
00:20:04.815  		--rc geninfo_unexecuted_blocks=1
00:20:04.815  		
00:20:04.815  		'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:04.815  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.815  		--rc genhtml_branch_coverage=1
00:20:04.815  		--rc genhtml_function_coverage=1
00:20:04.815  		--rc genhtml_legend=1
00:20:04.815  		--rc geninfo_all_blocks=1
00:20:04.815  		--rc geninfo_unexecuted_blocks=1
00:20:04.815  		
00:20:04.815  		'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:20:04.815  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.815  		--rc genhtml_branch_coverage=1
00:20:04.815  		--rc genhtml_function_coverage=1
00:20:04.815  		--rc genhtml_legend=1
00:20:04.815  		--rc geninfo_all_blocks=1
00:20:04.815  		--rc geninfo_unexecuted_blocks=1
00:20:04.815  		
00:20:04.815  		'
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:20:04.815  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:04.815  		--rc genhtml_branch_coverage=1
00:20:04.815  		--rc genhtml_function_coverage=1
00:20:04.815  		--rc genhtml_legend=1
00:20:04.815  		--rc geninfo_all_blocks=1
00:20:04.815  		--rc geninfo_unexecuted_blocks=1
00:20:04.815  		
00:20:04.815  		'
00:20:04.815   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:04.815     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:04.815    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:04.816     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob
00:20:04.816     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:04.816     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:04.816     00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:04.816      00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:04.816      00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:04.816      00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:04.816      00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH
00:20:04.816      00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:20:04.816  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=()
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=()
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:04.816    00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable
00:20:04.816   00:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=()
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:20:10.093  Found 0000:af:00.0 (0x8086 - 0x159b)
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:20:10.093  Found 0000:af:00.1 (0x8086 - 0x159b)
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:20:10.093  Found net devices under 0000:af:00.0: cvl_0_0
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:20:10.093   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:20:10.094  Found net devices under 0000:af:00.1: cvl_0_1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:20:10.094  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:10.094  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms
00:20:10.094  
00:20:10.094  --- 10.0.0.2 ping statistics ---
00:20:10.094  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:10.094  rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:10.094  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:10.094  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:20:10.094  
00:20:10.094  --- 10.0.0.1 ping statistics ---
00:20:10.094  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:10.094  rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3071531
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3071531
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3071531 ']'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3071550
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth
00:20:10.094   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:10.094     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a67177c9f4e1b9a6c5d8f5c62009a0d95d04e16a8ed26260
00:20:10.094     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i8m
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a67177c9f4e1b9a6c5d8f5c62009a0d95d04e16a8ed26260 0
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a67177c9f4e1b9a6c5d8f5c62009a0d95d04e16a8ed26260 0
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.094    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a67177c9f4e1b9a6c5d8f5c62009a0d95d04e16a8ed26260
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i8m
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i8m
00:20:10.095   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.i8m
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=26ae533b2a7c515ea65f8b02db72ea5c74cd694d8892dbd4da3c8715dab782f3
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KlG
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 26ae533b2a7c515ea65f8b02db72ea5c74cd694d8892dbd4da3c8715dab782f3 3
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 26ae533b2a7c515ea65f8b02db72ea5c74cd694d8892dbd4da3c8715dab782f3 3
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=26ae533b2a7c515ea65f8b02db72ea5c74cd694d8892dbd4da3c8715dab782f3
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KlG
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KlG
00:20:10.095   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.KlG
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a0d1357769556540f7475d3a2bde2569
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.hBX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a0d1357769556540f7475d3a2bde2569 1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a0d1357769556540f7475d3a2bde2569 1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a0d1357769556540f7475d3a2bde2569
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.hBX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.hBX
00:20:10.095   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.hBX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c5dac3d8c77b547949c8c712137778738836f1aeccf07f4d
00:20:10.095     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.z0d
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c5dac3d8c77b547949c8c712137778738836f1aeccf07f4d 2
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c5dac3d8c77b547949c8c712137778738836f1aeccf07f4d 2
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c5dac3d8c77b547949c8c712137778738836f1aeccf07f4d
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:10.095    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.z0d
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.z0d
00:20:10.359   00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.z0d
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48
00:20:10.359     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=58874eebe2dce4c3f4301ce6f018cb5cd7850855ed2f0e33
00:20:10.359     00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.80f
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 58874eebe2dce4c3f4301ce6f018cb5cd7850855ed2f0e33 2
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 58874eebe2dce4c3f4301ce6f018cb5cd7850855ed2f0e33 2
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=58874eebe2dce4c3f4301ce6f018cb5cd7850855ed2f0e33
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2
00:20:10.359    00:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.80f
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.80f
00:20:10.359   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.80f
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32
00:20:10.359     00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb7767d2b7ce16af9ba872e3a1b61f5c
00:20:10.359     00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.mDM
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb7767d2b7ce16af9ba872e3a1b61f5c 1
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb7767d2b7ce16af9ba872e3a1b61f5c 1
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb7767d2b7ce16af9ba872e3a1b61f5c
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.mDM
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.mDM
00:20:10.359   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.mDM
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64
00:20:10.359     00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e6e09ae34ca654be0396f20390cd079051f466ca5b2d9d0d0810eda155d459e3
00:20:10.359     00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.nod
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e6e09ae34ca654be0396f20390cd079051f466ca5b2d9d0d0810eda155d459e3 3
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e6e09ae34ca654be0396f20390cd079051f466ca5b2d9d0d0810eda155d459e3 3
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e6e09ae34ca654be0396f20390cd079051f466ca5b2d9d0d0810eda155d459e3
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python -
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.nod
00:20:10.359    00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.nod
00:20:10.359   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.nod
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]=
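The `gen_dhchap_key` traces above follow a fixed recipe: draw `len/2` random bytes with `xxd` from `/dev/urandom`, then hand them to `python -` to wrap them in the `DHHC-1:<digest>:<secret>:` framing before writing the result to a `chmod 0600` temp file. A minimal Python sketch of that formatting step is below. It assumes the secret encoding matches the NVMe DH-HMAC-CHAP representation used by nvme-cli (key bytes with a trailing little-endian CRC-32, base64-encoded); the function name and layout here are illustrative, not SPDK's actual helper.

```python
import base64
import os
import zlib

def gen_dhchap_key(hmac_id: int, key_len: int) -> str:
    """Sketch of the gen_dhchap_key/format_dhchap_key pair traced above.

    hmac_id follows the digests map in the log: 0=null, 1=sha256,
    2=sha384, 3=sha512. key_len is the raw key size in bytes.
    """
    key = os.urandom(key_len)                      # xxd -p -c0 -l N /dev/urandom
    crc = zlib.crc32(key).to_bytes(4, "little")    # CRC-32 of the key, appended
    secret = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{hmac_id:02x}:{secret}:"

# e.g. a sha384-sized key, as in "gen_dhchap_key sha384 48" above
secret = gen_dhchap_key(2, 48)
```

Decoding the base64 portion of such a secret yields the raw key followed by its 4-byte checksum, which is how a consumer can validate the key material before use.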
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3071531
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3071531 ']'
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:10.360  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:10.360   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3071550 /var/tmp/host.sock
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3071550 ']'
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
00:20:10.618  Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:10.618   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i8m
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.i8m
00:20:10.877   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.i8m
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.KlG ]]
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KlG
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KlG
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KlG
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hBX
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.136   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.394   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.394   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.hBX
00:20:11.394   00:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.hBX
00:20:11.394   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.z0d ]]
00:20:11.394   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z0d
00:20:11.395   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.395   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.395   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.395   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z0d
00:20:11.395   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z0d
00:20:11.653   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.80f
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.80f
00:20:11.654   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.80f
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.mDM ]]
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mDM
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mDM
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mDM
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}"
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nod
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.nod
00:20:11.913   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.nod
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]]
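The registration loop traced above (`target/auth.sh@108`-`@113`) adds each generated key file twice via `keyring_file_add_key`: once against the target's default RPC socket and once against `/var/tmp/host.sock` through the `hostrpc` wrapper, skipping a controller key whenever `ckeys[i]` is empty (as happens for `ckey3`). The sketch below reconstructs just that command-building logic; the file paths are the ones from this run, and `registration_cmds` is a hypothetical helper, not part of SPDK.

```python
# Key/ckey files generated earlier in this trace.
keys = {0: "/tmp/spdk.key-null.i8m", 1: "/tmp/spdk.key-sha256.hBX",
        2: "/tmp/spdk.key-sha384.80f", 3: "/tmp/spdk.key-sha512.nod"}
ckeys = {0: "/tmp/spdk.key-sha512.KlG", 1: "/tmp/spdk.key-sha384.z0d",
         2: "/tmp/spdk.key-sha256.mDM", 3: ""}  # empty -> skipped by [[ -n ... ]]

def registration_cmds(rpc="scripts/rpc.py", host_sock="/var/tmp/host.sock"):
    """Build the rpc.py invocations the auth.sh loop issues, without running them."""
    cmds = []
    for i, path in sorted(keys.items()):
        for name, p in ((f"key{i}", path), (f"ckey{i}", ckeys[i])):
            if not p:  # mirrors the [[ -n $ckey ]] guard in auth.sh
                continue
            cmds.append([rpc, "keyring_file_add_key", name, p])                   # target side
            cmds.append([rpc, "-s", host_sock, "keyring_file_add_key", name, p])  # host side
    return cmds
```

With four keys and three non-empty controller keys, the loop issues fourteen `keyring_file_add_key` calls in total, which matches the trace.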
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:12.171   00:01:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.430   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:12.688  
00:20:12.688    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:12.688    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:12.688    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:12.947  {
00:20:12.947  "cntlid": 1,
00:20:12.947  "qid": 0,
00:20:12.947  "state": "enabled",
00:20:12.947  "thread": "nvmf_tgt_poll_group_000",
00:20:12.947  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:12.947  "listen_address": {
00:20:12.947  "trtype": "TCP",
00:20:12.947  "adrfam": "IPv4",
00:20:12.947  "traddr": "10.0.0.2",
00:20:12.947  "trsvcid": "4420"
00:20:12.947  },
00:20:12.947  "peer_address": {
00:20:12.947  "trtype": "TCP",
00:20:12.947  "adrfam": "IPv4",
00:20:12.947  "traddr": "10.0.0.1",
00:20:12.947  "trsvcid": "39750"
00:20:12.947  },
00:20:12.947  "auth": {
00:20:12.947  "state": "completed",
00:20:12.947  "digest": "sha256",
00:20:12.947  "dhgroup": "null"
00:20:12.947  }
00:20:12.947  }
00:20:12.947  ]'
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:12.947    00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:12.947   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.205   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:13.205   00:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:13.772  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
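One full `connect_authenticate` cycle has just completed above: configure the host's DH-CHAP digests/dhgroups, add the host NQN to the subsystem with a key pair, attach the controller, then query `nvmf_subsystem_get_qpairs` and compare the reported `auth` block (`state`, `digest`, `dhgroup`) against what was configured, via three `jq` one-liners. The sketch below mirrors that verification step in Python against the qpair JSON captured in this log; `check_auth` is an illustrative stand-in for the `jq`/`[[ ... ]]` checks, not an SPDK API.

```python
import json

# Abbreviated form of the qpairs JSON printed by nvmf_subsystem_get_qpairs above.
qpairs_json = """[
  {"cntlid": 1, "qid": 0, "state": "enabled",
   "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}
]"""

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    """Replicate the jq checks: authentication completed with the
    expected digest and DH group on the first qpair."""
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)
```

Only when all three fields match does `auth.sh` proceed to detach the controller and move on to the next key/dhgroup combination, which is exactly the pattern repeated below for `key1`, `key2`, and so on.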
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:13.772   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.031   00:01:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.289  
00:20:14.289    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:14.289    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:14.289    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:14.548  {
00:20:14.548  "cntlid": 3,
00:20:14.548  "qid": 0,
00:20:14.548  "state": "enabled",
00:20:14.548  "thread": "nvmf_tgt_poll_group_000",
00:20:14.548  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:14.548  "listen_address": {
00:20:14.548  "trtype": "TCP",
00:20:14.548  "adrfam": "IPv4",
00:20:14.548  "traddr": "10.0.0.2",
00:20:14.548  "trsvcid": "4420"
00:20:14.548  },
00:20:14.548  "peer_address": {
00:20:14.548  "trtype": "TCP",
00:20:14.548  "adrfam": "IPv4",
00:20:14.548  "traddr": "10.0.0.1",
00:20:14.548  "trsvcid": "39766"
00:20:14.548  },
00:20:14.548  "auth": {
00:20:14.548  "state": "completed",
00:20:14.548  "digest": "sha256",
00:20:14.548  "dhgroup": "null"
00:20:14.548  }
00:20:14.548  }
00:20:14.548  ]'
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:14.548    00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:14.548   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:14.815   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:14.815   00:01:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:15.383  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:15.383   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:15.642   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:15.900  
00:20:15.900    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:15.900    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:15.900    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:16.158   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:16.158    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:16.158    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.158    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.158    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.158   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:16.158  {
00:20:16.158  "cntlid": 5,
00:20:16.158  "qid": 0,
00:20:16.158  "state": "enabled",
00:20:16.158  "thread": "nvmf_tgt_poll_group_000",
00:20:16.159  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:16.159  "listen_address": {
00:20:16.159  "trtype": "TCP",
00:20:16.159  "adrfam": "IPv4",
00:20:16.159  "traddr": "10.0.0.2",
00:20:16.159  "trsvcid": "4420"
00:20:16.159  },
00:20:16.159  "peer_address": {
00:20:16.159  "trtype": "TCP",
00:20:16.159  "adrfam": "IPv4",
00:20:16.159  "traddr": "10.0.0.1",
00:20:16.159  "trsvcid": "39802"
00:20:16.159  },
00:20:16.159  "auth": {
00:20:16.159  "state": "completed",
00:20:16.159  "digest": "sha256",
00:20:16.159  "dhgroup": "null"
00:20:16.159  }
00:20:16.159  }
00:20:16.159  ]'
00:20:16.159    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:16.159   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:16.159    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:16.159   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:16.159    00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:16.159   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:16.159   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:16.159   00:01:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:16.417   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:16.417   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:16.984  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:16.984   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:17.243   00:01:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:17.501  
00:20:17.501    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:17.501    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:17.501    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:17.758   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:17.758    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:17.758    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.758    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.758    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.758   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:17.758  {
00:20:17.758  "cntlid": 7,
00:20:17.758  "qid": 0,
00:20:17.758  "state": "enabled",
00:20:17.758  "thread": "nvmf_tgt_poll_group_000",
00:20:17.758  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:17.758  "listen_address": {
00:20:17.758  "trtype": "TCP",
00:20:17.759  "adrfam": "IPv4",
00:20:17.759  "traddr": "10.0.0.2",
00:20:17.759  "trsvcid": "4420"
00:20:17.759  },
00:20:17.759  "peer_address": {
00:20:17.759  "trtype": "TCP",
00:20:17.759  "adrfam": "IPv4",
00:20:17.759  "traddr": "10.0.0.1",
00:20:17.759  "trsvcid": "39822"
00:20:17.759  },
00:20:17.759  "auth": {
00:20:17.759  "state": "completed",
00:20:17.759  "digest": "sha256",
00:20:17.759  "dhgroup": "null"
00:20:17.759  }
00:20:17.759  }
00:20:17.759  ]'
00:20:17.759    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:17.759   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:17.759    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:17.759   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:17.759    00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:17.759   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:17.759   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:17.759   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:18.016   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:18.016   00:01:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:18.583  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:18.583   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:18.842   00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:19.103  
00:20:19.103    00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:19.103    00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:19.103    00:01:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:19.422  {
00:20:19.422  "cntlid": 9,
00:20:19.422  "qid": 0,
00:20:19.422  "state": "enabled",
00:20:19.422  "thread": "nvmf_tgt_poll_group_000",
00:20:19.422  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:19.422  "listen_address": {
00:20:19.422  "trtype": "TCP",
00:20:19.422  "adrfam": "IPv4",
00:20:19.422  "traddr": "10.0.0.2",
00:20:19.422  "trsvcid": "4420"
00:20:19.422  },
00:20:19.422  "peer_address": {
00:20:19.422  "trtype": "TCP",
00:20:19.422  "adrfam": "IPv4",
00:20:19.422  "traddr": "10.0.0.1",
00:20:19.422  "trsvcid": "39848"
00:20:19.422  },
00:20:19.422  "auth": {
00:20:19.422  "state": "completed",
00:20:19.422  "digest": "sha256",
00:20:19.422  "dhgroup": "ffdhe2048"
00:20:19.422  }
00:20:19.422  }
00:20:19.422  ]'
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:19.422    00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:19.422   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:19.763   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:19.763   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:20.343  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:20.343   00:01:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.343   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:20.601  
00:20:20.601    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:20.601    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:20.601    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:20.859   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:20.859   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:20.859  {
00:20:20.859  "cntlid": 11,
00:20:20.859  "qid": 0,
00:20:20.859  "state": "enabled",
00:20:20.859  "thread": "nvmf_tgt_poll_group_000",
00:20:20.859  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:20.859  "listen_address": {
00:20:20.859  "trtype": "TCP",
00:20:20.859  "adrfam": "IPv4",
00:20:20.859  "traddr": "10.0.0.2",
00:20:20.859  "trsvcid": "4420"
00:20:20.859  },
00:20:20.859  "peer_address": {
00:20:20.859  "trtype": "TCP",
00:20:20.859  "adrfam": "IPv4",
00:20:20.859  "traddr": "10.0.0.1",
00:20:20.859  "trsvcid": "39880"
00:20:20.859  },
00:20:20.859  "auth": {
00:20:20.859  "state": "completed",
00:20:20.859  "digest": "sha256",
00:20:20.859  "dhgroup": "ffdhe2048"
00:20:20.859  }
00:20:20.859  }
00:20:20.859  ]'
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:20.859   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:20.859    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:21.118    00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:21.118   00:01:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:21.686   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:21.686  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:21.686   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:21.686   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.686   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:21.946   00:01:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:22.204  
00:20:22.204    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:22.204    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:22.204    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:22.462   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:22.462   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:22.462  {
00:20:22.462  "cntlid": 13,
00:20:22.462  "qid": 0,
00:20:22.462  "state": "enabled",
00:20:22.462  "thread": "nvmf_tgt_poll_group_000",
00:20:22.462  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:22.462  "listen_address": {
00:20:22.462  "trtype": "TCP",
00:20:22.462  "adrfam": "IPv4",
00:20:22.462  "traddr": "10.0.0.2",
00:20:22.462  "trsvcid": "4420"
00:20:22.462  },
00:20:22.462  "peer_address": {
00:20:22.462  "trtype": "TCP",
00:20:22.462  "adrfam": "IPv4",
00:20:22.462  "traddr": "10.0.0.1",
00:20:22.462  "trsvcid": "55352"
00:20:22.462  },
00:20:22.462  "auth": {
00:20:22.462  "state": "completed",
00:20:22.462  "digest": "sha256",
00:20:22.462  "dhgroup": "ffdhe2048"
00:20:22.462  }
00:20:22.462  }
00:20:22.462  ]'
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:22.462   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:22.462    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:22.721   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:22.721    00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:22.721   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
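The three jq checks above (target/auth.sh lines 75-77) verify the negotiated auth parameters in the qpairs JSON returned by nvmf_subsystem_get_qpairs. As a hedged illustration, the same checks can be reproduced in Python against a trimmed copy of the JSON captured in this log (field names are taken verbatim from the output above; everything else is illustrative):

```python
import json

# Trimmed qpairs JSON, as captured by nvmf_subsystem_get_qpairs above
qpairs = json.loads("""[
{
  "cntlid": 13,
  "qid": 0,
  "state": "enabled",
  "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe2048"}
}
]""")

# Equivalent of the jq '.[0].auth.*' checks in target/auth.sh@75-77
assert qpairs[0]["auth"]["digest"] == "sha256"
assert qpairs[0]["auth"]["dhgroup"] == "ffdhe2048"
assert qpairs[0]["auth"]["state"] == "completed"
print("auth qpair verified")
```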
00:20:22.721   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:22.721   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:22.980   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:22.980   00:01:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:23.547  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:23.547   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:23.806  
00:20:23.806    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:23.806    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:23.806    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:24.065   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:24.065   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:24.065  {
00:20:24.065  "cntlid": 15,
00:20:24.065  "qid": 0,
00:20:24.065  "state": "enabled",
00:20:24.065  "thread": "nvmf_tgt_poll_group_000",
00:20:24.065  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:24.065  "listen_address": {
00:20:24.065  "trtype": "TCP",
00:20:24.065  "adrfam": "IPv4",
00:20:24.065  "traddr": "10.0.0.2",
00:20:24.065  "trsvcid": "4420"
00:20:24.065  },
00:20:24.065  "peer_address": {
00:20:24.065  "trtype": "TCP",
00:20:24.065  "adrfam": "IPv4",
00:20:24.065  "traddr": "10.0.0.1",
00:20:24.065  "trsvcid": "55392"
00:20:24.065  },
00:20:24.065  "auth": {
00:20:24.065  "state": "completed",
00:20:24.065  "digest": "sha256",
00:20:24.065  "dhgroup": "ffdhe2048"
00:20:24.065  }
00:20:24.065  }
00:20:24.065  ]'
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:24.065   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:24.065   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:24.065    00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:24.324   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:24.324   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:24.324   00:01:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:24.325   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:24.325   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:24.892  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
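The transcript is driven by auth.sh's nested loops (line 119 over dhgroups, line 120 over key ids), each iteration calling bdev_nvme_set_options and then connect_authenticate. A minimal self-contained sketch of that control flow, with the dhgroup and key lists limited to what has appeared in the log so far and the actual RPC calls elided:

```shell
#!/usr/bin/env bash
# Sketch of the auth.sh@119-123 loop structure seen in this transcript.
# Real iterations call bdev_nvme_set_options + connect_authenticate; this
# only prints each (digest, dhgroup, keyid) combination that gets tested.
digest=sha256
dhgroups=(ffdhe2048 ffdhe3072)   # groups observed so far in this log
keys=(key0 key1 key2 key3)       # key ids observed in this log

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        echo "connect_authenticate $digest $dhgroup $keyid"
    done
done
```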
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:24.892   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:25.151   00:01:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:25.412  
00:20:25.412    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:25.412    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:25.412    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:25.670   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:25.670   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:25.670  {
00:20:25.670  "cntlid": 17,
00:20:25.670  "qid": 0,
00:20:25.670  "state": "enabled",
00:20:25.670  "thread": "nvmf_tgt_poll_group_000",
00:20:25.670  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:25.670  "listen_address": {
00:20:25.670  "trtype": "TCP",
00:20:25.670  "adrfam": "IPv4",
00:20:25.670  "traddr": "10.0.0.2",
00:20:25.670  "trsvcid": "4420"
00:20:25.670  },
00:20:25.670  "peer_address": {
00:20:25.670  "trtype": "TCP",
00:20:25.670  "adrfam": "IPv4",
00:20:25.670  "traddr": "10.0.0.1",
00:20:25.670  "trsvcid": "55424"
00:20:25.670  },
00:20:25.670  "auth": {
00:20:25.670  "state": "completed",
00:20:25.670  "digest": "sha256",
00:20:25.670  "dhgroup": "ffdhe3072"
00:20:25.670  }
00:20:25.670  }
00:20:25.670  ]'
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:25.670   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:25.670   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:25.670    00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:25.928   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:25.928   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:25.928   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:25.928   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:25.928   00:01:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:26.495  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:26.495   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:26.754   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:27.012  
00:20:27.012    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:27.012    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:27.012    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:27.271   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:27.271    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:27.272    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:27.272    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:27.272    00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:27.272   00:01:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:27.272  {
00:20:27.272  "cntlid": 19,
00:20:27.272  "qid": 0,
00:20:27.272  "state": "enabled",
00:20:27.272  "thread": "nvmf_tgt_poll_group_000",
00:20:27.272  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:27.272  "listen_address": {
00:20:27.272  "trtype": "TCP",
00:20:27.272  "adrfam": "IPv4",
00:20:27.272  "traddr": "10.0.0.2",
00:20:27.272  "trsvcid": "4420"
00:20:27.272  },
00:20:27.272  "peer_address": {
00:20:27.272  "trtype": "TCP",
00:20:27.272  "adrfam": "IPv4",
00:20:27.272  "traddr": "10.0.0.1",
00:20:27.272  "trsvcid": "55442"
00:20:27.272  },
00:20:27.272  "auth": {
00:20:27.272  "state": "completed",
00:20:27.272  "digest": "sha256",
00:20:27.272  "dhgroup": "ffdhe3072"
00:20:27.272  }
00:20:27.272  }
00:20:27.272  ]'
00:20:27.272    00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:27.272   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:27.272    00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:27.272   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:27.272    00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:27.272   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:27.272   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:27.272   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:27.530   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:27.530   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:28.098  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
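The --dhchap-secret strings passed to nvme connect above use the NVMe DH-HMAC-CHAP secret representation, "DHHC-1:&lt;hash&gt;:&lt;base64&gt;:", where the hash field selects the key transformation (00 for none, 01/02/03 for SHA-256/384/512) and the base64 payload carries the key followed by a 4-byte CRC-32. A hedged decoder sketch using one of the secrets from this log; the little-endian CRC byte order is my reading of the format, not something confirmed by this transcript:

```python
import base64
import zlib

# One of the host secrets from the log above (hash id 02 -> 48-byte key)
secret = ("DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1"
          "MDg1NWVkMmYwZTMz7J972g==:")

magic, hash_id, b64, _ = secret.split(":")
assert magic == "DHHC-1"
raw = base64.b64decode(b64)
key, crc = raw[:-4], raw[-4:]  # payload = key material + trailing CRC-32
print("hash id:", hash_id, "key bytes:", len(key))
# If the trailing bytes are a little-endian CRC-32 of the key (an
# assumption about the encoding), this comparison should hold:
print("crc ok:", zlib.crc32(key).to_bytes(4, "little") == crc)
```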
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:28.098   00:01:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:28.358   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:28.617  
00:20:28.617    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:28.617    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:28.618    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:28.877  {
00:20:28.877  "cntlid": 21,
00:20:28.877  "qid": 0,
00:20:28.877  "state": "enabled",
00:20:28.877  "thread": "nvmf_tgt_poll_group_000",
00:20:28.877  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:28.877  "listen_address": {
00:20:28.877  "trtype": "TCP",
00:20:28.877  "adrfam": "IPv4",
00:20:28.877  "traddr": "10.0.0.2",
00:20:28.877  "trsvcid": "4420"
00:20:28.877  },
00:20:28.877  "peer_address": {
00:20:28.877  "trtype": "TCP",
00:20:28.877  "adrfam": "IPv4",
00:20:28.877  "traddr": "10.0.0.1",
00:20:28.877  "trsvcid": "55470"
00:20:28.877  },
00:20:28.877  "auth": {
00:20:28.877  "state": "completed",
00:20:28.877  "digest": "sha256",
00:20:28.877  "dhgroup": "ffdhe3072"
00:20:28.877  }
00:20:28.877  }
00:20:28.877  ]'
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:28.877    00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:28.877   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:29.134   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:29.134   00:01:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:29.702  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:29.702   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:29.960   00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:30.218  
00:20:30.218    00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:30.218    00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:30.218    00:01:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:30.477  {
00:20:30.477  "cntlid": 23,
00:20:30.477  "qid": 0,
00:20:30.477  "state": "enabled",
00:20:30.477  "thread": "nvmf_tgt_poll_group_000",
00:20:30.477  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:30.477  "listen_address": {
00:20:30.477  "trtype": "TCP",
00:20:30.477  "adrfam": "IPv4",
00:20:30.477  "traddr": "10.0.0.2",
00:20:30.477  "trsvcid": "4420"
00:20:30.477  },
00:20:30.477  "peer_address": {
00:20:30.477  "trtype": "TCP",
00:20:30.477  "adrfam": "IPv4",
00:20:30.477  "traddr": "10.0.0.1",
00:20:30.477  "trsvcid": "55502"
00:20:30.477  },
00:20:30.477  "auth": {
00:20:30.477  "state": "completed",
00:20:30.477  "digest": "sha256",
00:20:30.477  "dhgroup": "ffdhe3072"
00:20:30.477  }
00:20:30.477  }
00:20:30.477  ]'
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:30.477    00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:30.477   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:30.736   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:30.736   00:01:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:31.304   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:31.304  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:31.304   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:31.304   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:31.305   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:31.563   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:31.822  
00:20:31.822    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:31.822    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:31.822    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:32.081   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:32.081    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:32.081    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.081    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.081    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.081   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:32.081  {
00:20:32.081  "cntlid": 25,
00:20:32.081  "qid": 0,
00:20:32.081  "state": "enabled",
00:20:32.082  "thread": "nvmf_tgt_poll_group_000",
00:20:32.082  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:32.082  "listen_address": {
00:20:32.082  "trtype": "TCP",
00:20:32.082  "adrfam": "IPv4",
00:20:32.082  "traddr": "10.0.0.2",
00:20:32.082  "trsvcid": "4420"
00:20:32.082  },
00:20:32.082  "peer_address": {
00:20:32.082  "trtype": "TCP",
00:20:32.082  "adrfam": "IPv4",
00:20:32.082  "traddr": "10.0.0.1",
00:20:32.082  "trsvcid": "56444"
00:20:32.082  },
00:20:32.082  "auth": {
00:20:32.082  "state": "completed",
00:20:32.082  "digest": "sha256",
00:20:32.082  "dhgroup": "ffdhe4096"
00:20:32.082  }
00:20:32.082  }
00:20:32.082  ]'
00:20:32.082    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:32.082   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:32.082    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:32.082   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:32.082    00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:32.082   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:32.082   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:32.082   00:01:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:32.341   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:32.341   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:32.910  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:32.910   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:33.168   00:01:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:33.426  
00:20:33.426    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:33.426    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:33.426    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:33.692  {
00:20:33.692  "cntlid": 27,
00:20:33.692  "qid": 0,
00:20:33.692  "state": "enabled",
00:20:33.692  "thread": "nvmf_tgt_poll_group_000",
00:20:33.692  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:33.692  "listen_address": {
00:20:33.692  "trtype": "TCP",
00:20:33.692  "adrfam": "IPv4",
00:20:33.692  "traddr": "10.0.0.2",
00:20:33.692  "trsvcid": "4420"
00:20:33.692  },
00:20:33.692  "peer_address": {
00:20:33.692  "trtype": "TCP",
00:20:33.692  "adrfam": "IPv4",
00:20:33.692  "traddr": "10.0.0.1",
00:20:33.692  "trsvcid": "56476"
00:20:33.692  },
00:20:33.692  "auth": {
00:20:33.692  "state": "completed",
00:20:33.692  "digest": "sha256",
00:20:33.692  "dhgroup": "ffdhe4096"
00:20:33.692  }
00:20:33.692  }
00:20:33.692  ]'
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:33.692    00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:33.692   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:33.953   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:33.954   00:01:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:34.520   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:34.521  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:34.521   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:34.780   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:35.039  
00:20:35.039    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:35.039    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:35.039    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:35.300   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.300   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:35.300  {
00:20:35.300  "cntlid": 29,
00:20:35.300  "qid": 0,
00:20:35.300  "state": "enabled",
00:20:35.300  "thread": "nvmf_tgt_poll_group_000",
00:20:35.300  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:35.300  "listen_address": {
00:20:35.300  "trtype": "TCP",
00:20:35.300  "adrfam": "IPv4",
00:20:35.300  "traddr": "10.0.0.2",
00:20:35.300  "trsvcid": "4420"
00:20:35.300  },
00:20:35.300  "peer_address": {
00:20:35.300  "trtype": "TCP",
00:20:35.300  "adrfam": "IPv4",
00:20:35.300  "traddr": "10.0.0.1",
00:20:35.300  "trsvcid": "56510"
00:20:35.300  },
00:20:35.300  "auth": {
00:20:35.300  "state": "completed",
00:20:35.300  "digest": "sha256",
00:20:35.300  "dhgroup": "ffdhe4096"
00:20:35.300  }
00:20:35.300  }
00:20:35.300  ]'
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:35.300   00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:35.300    00:01:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:35.300   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:35.300    00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:35.300   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:35.300   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:35.300   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:35.562   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:35.562   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:36.129  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:36.129   00:01:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:36.388   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:36.646  
00:20:36.646    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:36.646    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:36.646    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:36.905   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:36.905    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:36.905    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.905    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.905    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.905   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:36.905  {
00:20:36.905  "cntlid": 31,
00:20:36.905  "qid": 0,
00:20:36.905  "state": "enabled",
00:20:36.905  "thread": "nvmf_tgt_poll_group_000",
00:20:36.905  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:36.905  "listen_address": {
00:20:36.905  "trtype": "TCP",
00:20:36.905  "adrfam": "IPv4",
00:20:36.905  "traddr": "10.0.0.2",
00:20:36.905  "trsvcid": "4420"
00:20:36.905  },
00:20:36.905  "peer_address": {
00:20:36.905  "trtype": "TCP",
00:20:36.905  "adrfam": "IPv4",
00:20:36.905  "traddr": "10.0.0.1",
00:20:36.905  "trsvcid": "56532"
00:20:36.905  },
00:20:36.905  "auth": {
00:20:36.905  "state": "completed",
00:20:36.905  "digest": "sha256",
00:20:36.905  "dhgroup": "ffdhe4096"
00:20:36.905  }
00:20:36.906  }
00:20:36.906  ]'
00:20:36.906    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:36.906   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:36.906    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:36.906   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:36.906    00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:36.906   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:36.906   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:36.906   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:37.165   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:37.165   00:01:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:37.732  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:37.732   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:37.991   00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:38.249  
00:20:38.249    00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:38.249    00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:38.249    00:01:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:38.508   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:38.508    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:38.508    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.508    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.508    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.508   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:38.508  {
00:20:38.508  "cntlid": 33,
00:20:38.508  "qid": 0,
00:20:38.508  "state": "enabled",
00:20:38.509  "thread": "nvmf_tgt_poll_group_000",
00:20:38.509  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:38.509  "listen_address": {
00:20:38.509  "trtype": "TCP",
00:20:38.509  "adrfam": "IPv4",
00:20:38.509  "traddr": "10.0.0.2",
00:20:38.509  "trsvcid": "4420"
00:20:38.509  },
00:20:38.509  "peer_address": {
00:20:38.509  "trtype": "TCP",
00:20:38.509  "adrfam": "IPv4",
00:20:38.509  "traddr": "10.0.0.1",
00:20:38.509  "trsvcid": "56558"
00:20:38.509  },
00:20:38.509  "auth": {
00:20:38.509  "state": "completed",
00:20:38.509  "digest": "sha256",
00:20:38.509  "dhgroup": "ffdhe6144"
00:20:38.509  }
00:20:38.509  }
00:20:38.509  ]'
00:20:38.509    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:38.509   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:38.509    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:38.509   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:38.509    00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:38.509   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:38.509   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:38.509   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:38.767   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:38.768   00:01:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:39.336  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:39.336   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:39.337   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.595   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:39.854  
00:20:39.854    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:39.854    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:39.854    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:40.111   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.111    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:40.111    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.111    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.112    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:40.112  {
00:20:40.112  "cntlid": 35,
00:20:40.112  "qid": 0,
00:20:40.112  "state": "enabled",
00:20:40.112  "thread": "nvmf_tgt_poll_group_000",
00:20:40.112  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:40.112  "listen_address": {
00:20:40.112  "trtype": "TCP",
00:20:40.112  "adrfam": "IPv4",
00:20:40.112  "traddr": "10.0.0.2",
00:20:40.112  "trsvcid": "4420"
00:20:40.112  },
00:20:40.112  "peer_address": {
00:20:40.112  "trtype": "TCP",
00:20:40.112  "adrfam": "IPv4",
00:20:40.112  "traddr": "10.0.0.1",
00:20:40.112  "trsvcid": "56582"
00:20:40.112  },
00:20:40.112  "auth": {
00:20:40.112  "state": "completed",
00:20:40.112  "digest": "sha256",
00:20:40.112  "dhgroup": "ffdhe6144"
00:20:40.112  }
00:20:40.112  }
00:20:40.112  ]'
00:20:40.112    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:40.112    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:40.112    00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:40.112   00:01:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:40.371   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:40.371   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:40.938  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:40.938   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.197   00:01:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:41.455  
00:20:41.455    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:41.455    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:41.455    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:41.714   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.714   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:41.714  {
00:20:41.714  "cntlid": 37,
00:20:41.714  "qid": 0,
00:20:41.714  "state": "enabled",
00:20:41.714  "thread": "nvmf_tgt_poll_group_000",
00:20:41.714  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:41.714  "listen_address": {
00:20:41.714  "trtype": "TCP",
00:20:41.714  "adrfam": "IPv4",
00:20:41.714  "traddr": "10.0.0.2",
00:20:41.714  "trsvcid": "4420"
00:20:41.714  },
00:20:41.714  "peer_address": {
00:20:41.714  "trtype": "TCP",
00:20:41.714  "adrfam": "IPv4",
00:20:41.714  "traddr": "10.0.0.1",
00:20:41.714  "trsvcid": "35580"
00:20:41.714  },
00:20:41.714  "auth": {
00:20:41.714  "state": "completed",
00:20:41.714  "digest": "sha256",
00:20:41.714  "dhgroup": "ffdhe6144"
00:20:41.714  }
00:20:41.714  }
00:20:41.714  ]'
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:41.714   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:41.714   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:41.714    00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:41.973   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:41.973   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:41.973   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:41.973   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:41.973   00:01:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:42.541  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:42.541   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:42.800   00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:43.057  
00:20:43.315    00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:43.315    00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:43.315    00:01:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:43.315   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:43.315    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:43.315    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:43.315    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:43.315    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:43.315   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:43.315  {
00:20:43.315  "cntlid": 39,
00:20:43.315  "qid": 0,
00:20:43.315  "state": "enabled",
00:20:43.315  "thread": "nvmf_tgt_poll_group_000",
00:20:43.315  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:43.315  "listen_address": {
00:20:43.315  "trtype": "TCP",
00:20:43.315  "adrfam": "IPv4",
00:20:43.315  "traddr": "10.0.0.2",
00:20:43.315  "trsvcid": "4420"
00:20:43.315  },
00:20:43.315  "peer_address": {
00:20:43.315  "trtype": "TCP",
00:20:43.315  "adrfam": "IPv4",
00:20:43.315  "traddr": "10.0.0.1",
00:20:43.315  "trsvcid": "35614"
00:20:43.316  },
00:20:43.316  "auth": {
00:20:43.316  "state": "completed",
00:20:43.316  "digest": "sha256",
00:20:43.316  "dhgroup": "ffdhe6144"
00:20:43.316  }
00:20:43.316  }
00:20:43.316  ]'
00:20:43.316    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:43.316   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:43.316    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:43.574   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:20:43.574    00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:43.574   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:43.574   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:43.574   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:43.834   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:43.835   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:44.408   00:01:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:44.408  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:44.408   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:44.976  
00:20:44.976    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:44.976    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:44.976    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:45.235   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:45.235   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:45.235  {
00:20:45.235  "cntlid": 41,
00:20:45.235  "qid": 0,
00:20:45.235  "state": "enabled",
00:20:45.235  "thread": "nvmf_tgt_poll_group_000",
00:20:45.235  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:45.235  "listen_address": {
00:20:45.235  "trtype": "TCP",
00:20:45.235  "adrfam": "IPv4",
00:20:45.235  "traddr": "10.0.0.2",
00:20:45.235  "trsvcid": "4420"
00:20:45.235  },
00:20:45.235  "peer_address": {
00:20:45.235  "trtype": "TCP",
00:20:45.235  "adrfam": "IPv4",
00:20:45.235  "traddr": "10.0.0.1",
00:20:45.235  "trsvcid": "35640"
00:20:45.235  },
00:20:45.235  "auth": {
00:20:45.235  "state": "completed",
00:20:45.235  "digest": "sha256",
00:20:45.235  "dhgroup": "ffdhe8192"
00:20:45.235  }
00:20:45.235  }
00:20:45.235  ]'
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:45.235   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:45.235    00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:45.235   00:02:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:45.235    00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:45.235   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:45.235   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:45.236   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:45.502   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:45.502   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:46.072  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:46.072   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:46.329   00:02:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.329   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:46.895  
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:46.895   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:46.895   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:46.895  {
00:20:46.895  "cntlid": 43,
00:20:46.895  "qid": 0,
00:20:46.895  "state": "enabled",
00:20:46.895  "thread": "nvmf_tgt_poll_group_000",
00:20:46.895  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:46.895  "listen_address": {
00:20:46.895  "trtype": "TCP",
00:20:46.895  "adrfam": "IPv4",
00:20:46.895  "traddr": "10.0.0.2",
00:20:46.895  "trsvcid": "4420"
00:20:46.895  },
00:20:46.895  "peer_address": {
00:20:46.895  "trtype": "TCP",
00:20:46.895  "adrfam": "IPv4",
00:20:46.895  "traddr": "10.0.0.1",
00:20:46.895  "trsvcid": "35678"
00:20:46.895  },
00:20:46.895  "auth": {
00:20:46.895  "state": "completed",
00:20:46.895  "digest": "sha256",
00:20:46.895  "dhgroup": "ffdhe8192"
00:20:46.895  }
00:20:46.895  }
00:20:46.895  ]'
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:46.895   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:46.895    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:47.153   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:47.153    00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:47.153   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:47.153   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:47.153   00:02:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:47.412   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:47.412   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:47.988  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:47.988   00:02:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:48.553  
00:20:48.553    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:48.553    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:48.553    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:48.811  {
00:20:48.811  "cntlid": 45,
00:20:48.811  "qid": 0,
00:20:48.811  "state": "enabled",
00:20:48.811  "thread": "nvmf_tgt_poll_group_000",
00:20:48.811  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:48.811  "listen_address": {
00:20:48.811  "trtype": "TCP",
00:20:48.811  "adrfam": "IPv4",
00:20:48.811  "traddr": "10.0.0.2",
00:20:48.811  "trsvcid": "4420"
00:20:48.811  },
00:20:48.811  "peer_address": {
00:20:48.811  "trtype": "TCP",
00:20:48.811  "adrfam": "IPv4",
00:20:48.811  "traddr": "10.0.0.1",
00:20:48.811  "trsvcid": "35698"
00:20:48.811  },
00:20:48.811  "auth": {
00:20:48.811  "state": "completed",
00:20:48.811  "digest": "sha256",
00:20:48.811  "dhgroup": "ffdhe8192"
00:20:48.811  }
00:20:48.811  }
00:20:48.811  ]'
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:48.811    00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:48.811   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:49.070   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:49.070   00:02:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:49.637  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:49.637   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:49.897   00:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:50.465  
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:50.465   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:50.465   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:50.465  {
00:20:50.465  "cntlid": 47,
00:20:50.465  "qid": 0,
00:20:50.465  "state": "enabled",
00:20:50.465  "thread": "nvmf_tgt_poll_group_000",
00:20:50.465  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:50.465  "listen_address": {
00:20:50.465  "trtype": "TCP",
00:20:50.465  "adrfam": "IPv4",
00:20:50.465  "traddr": "10.0.0.2",
00:20:50.465  "trsvcid": "4420"
00:20:50.465  },
00:20:50.465  "peer_address": {
00:20:50.465  "trtype": "TCP",
00:20:50.465  "adrfam": "IPv4",
00:20:50.465  "traddr": "10.0.0.1",
00:20:50.465  "trsvcid": "35728"
00:20:50.465  },
00:20:50.465  "auth": {
00:20:50.465  "state": "completed",
00:20:50.465  "digest": "sha256",
00:20:50.465  "dhgroup": "ffdhe8192"
00:20:50.465  }
00:20:50.465  }
00:20:50.465  ]'
00:20:50.465    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:50.723   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:50.723    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:50.723   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:50.723    00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:50.723   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:50.723   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:50.724   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:50.982   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:50.982   00:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:51.549   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:51.550  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
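The checks at auth.sh lines 75-77 above extract fields from the qpair JSON with jq and compare them against the expected digest, DH group, and auth state. A minimal standalone sketch of that verification step, using an abridged copy of the qpairs output logged above (jq and bash assumed available; the real connect_authenticate does more than this):

```shell
# Hedged sketch of connect_authenticate's verification step (auth.sh@75-77).
# The JSON is an abridged copy of the qpairs output captured in this log.
qpairs='[{"cntlid": 47, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "ffdhe8192"}}]'

# Extract the negotiated auth parameters, as the script does with jq -r:
digest=$(printf '%s\n' "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(printf '%s\n' "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(printf '%s\n' "$qpairs" | jq -r '.[0].auth.state')

# Mirror the script's [[ actual == expected ]] assertions:
[[ $digest == sha256 && $dhgroup == ffdhe8192 && $state == completed ]] \
  && echo "auth negotiation verified: $digest/$dhgroup"
```

Only `auth.state == completed` confirms that the DH-HMAC-CHAP handshake actually finished; the digest and dhgroup checks confirm the controller negotiated the parameters the test configured via `bdev_nvme_set_options`.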
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.550   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:51.808  
00:20:51.808    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:51.808    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:51.808    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:52.066   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:52.066   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:52.066  {
00:20:52.066  "cntlid": 49,
00:20:52.066  "qid": 0,
00:20:52.066  "state": "enabled",
00:20:52.066  "thread": "nvmf_tgt_poll_group_000",
00:20:52.066  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:52.066  "listen_address": {
00:20:52.066  "trtype": "TCP",
00:20:52.066  "adrfam": "IPv4",
00:20:52.066  "traddr": "10.0.0.2",
00:20:52.066  "trsvcid": "4420"
00:20:52.066  },
00:20:52.066  "peer_address": {
00:20:52.066  "trtype": "TCP",
00:20:52.066  "adrfam": "IPv4",
00:20:52.066  "traddr": "10.0.0.1",
00:20:52.066  "trsvcid": "39840"
00:20:52.066  },
00:20:52.066  "auth": {
00:20:52.066  "state": "completed",
00:20:52.066  "digest": "sha384",
00:20:52.066  "dhgroup": "null"
00:20:52.066  }
00:20:52.066  }
00:20:52.066  ]'
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:52.066   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:52.066    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:52.325   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:52.325    00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:52.325   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:52.325   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:52.325   00:02:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:52.325   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:52.325   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:52.892   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:53.151  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.151   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.152   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.152   00:02:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:53.410  
00:20:53.410    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:53.410    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:53.410    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:53.669   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:53.669   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:53.669  {
00:20:53.669  "cntlid": 51,
00:20:53.669  "qid": 0,
00:20:53.669  "state": "enabled",
00:20:53.669  "thread": "nvmf_tgt_poll_group_000",
00:20:53.669  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:53.669  "listen_address": {
00:20:53.669  "trtype": "TCP",
00:20:53.669  "adrfam": "IPv4",
00:20:53.669  "traddr": "10.0.0.2",
00:20:53.669  "trsvcid": "4420"
00:20:53.669  },
00:20:53.669  "peer_address": {
00:20:53.669  "trtype": "TCP",
00:20:53.669  "adrfam": "IPv4",
00:20:53.669  "traddr": "10.0.0.1",
00:20:53.669  "trsvcid": "39878"
00:20:53.669  },
00:20:53.669  "auth": {
00:20:53.669  "state": "completed",
00:20:53.669  "digest": "sha384",
00:20:53.669  "dhgroup": "null"
00:20:53.669  }
00:20:53.669  }
00:20:53.669  ]'
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:53.669   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:53.669   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:53.669    00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:53.929   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:53.929   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:53.929   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:53.929   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:53.929   00:02:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:20:54.495   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:54.496  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:54.496   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:54.754   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:54.755   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:55.014  
00:20:55.014    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:55.014    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:55.014    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:55.273   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:55.273    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:55.273    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:55.273    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:55.273    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:55.273   00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:55.273  {
00:20:55.273  "cntlid": 53,
00:20:55.273  "qid": 0,
00:20:55.273  "state": "enabled",
00:20:55.273  "thread": "nvmf_tgt_poll_group_000",
00:20:55.273  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:55.273  "listen_address": {
00:20:55.273  "trtype": "TCP",
00:20:55.273  "adrfam": "IPv4",
00:20:55.273  "traddr": "10.0.0.2",
00:20:55.273  "trsvcid": "4420"
00:20:55.273  },
00:20:55.273  "peer_address": {
00:20:55.273  "trtype": "TCP",
00:20:55.273  "adrfam": "IPv4",
00:20:55.273  "traddr": "10.0.0.1",
00:20:55.273  "trsvcid": "39908"
00:20:55.273  },
00:20:55.273  "auth": {
00:20:55.273  "state": "completed",
00:20:55.273  "digest": "sha384",
00:20:55.273  "dhgroup": "null"
00:20:55.273  }
00:20:55.273  }
00:20:55.273  ]'
00:20:55.273    00:02:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:55.273   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:55.273    00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:55.273   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:55.273    00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:55.273   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:55.273   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:55.273   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:55.532   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:55.532   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:20:56.100   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:56.100  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:56.100   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:56.100   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.101   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.101   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.101   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:56.101   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:56.101   00:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.381   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:56.672  
00:20:56.672    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:56.672    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:56.672    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:56.935   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:56.935   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:56.935  {
00:20:56.935  "cntlid": 55,
00:20:56.935  "qid": 0,
00:20:56.935  "state": "enabled",
00:20:56.935  "thread": "nvmf_tgt_poll_group_000",
00:20:56.935  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:56.935  "listen_address": {
00:20:56.935  "trtype": "TCP",
00:20:56.935  "adrfam": "IPv4",
00:20:56.935  "traddr": "10.0.0.2",
00:20:56.935  "trsvcid": "4420"
00:20:56.935  },
00:20:56.935  "peer_address": {
00:20:56.935  "trtype": "TCP",
00:20:56.935  "adrfam": "IPv4",
00:20:56.935  "traddr": "10.0.0.1",
00:20:56.935  "trsvcid": "39934"
00:20:56.935  },
00:20:56.935  "auth": {
00:20:56.935  "state": "completed",
00:20:56.935  "digest": "sha384",
00:20:56.935  "dhgroup": "null"
00:20:56.935  }
00:20:56.935  }
00:20:56.935  ]'
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:56.935   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:56.935   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:56.935    00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:56.936   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:56.936   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:56.936   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:57.195   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:57.195   00:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:57.773  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
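The loop markers above (auth.sh@118, @119, @120) show the test iterating every digest x dhgroup x keyid combination, reconfiguring the host RPC options and re-running connect_authenticate for each. A sketch of that nesting follows; the array contents are assumptions inferred from the values seen in this log excerpt (sha256/sha384 digests, null/ffdhe2048/ffdhe8192 groups, keys 0-3), and the real auth.sh may cover more:

```shell
# Hedged sketch of the nested iteration driving auth.sh lines 118-121.
# Array contents are inferred from this log; the actual script may differ.
digests=(sha256 sha384)
dhgroups=(null ffdhe2048 ffdhe8192)
keys=(key0 key1 key2 key3)

count=0
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Each combination reconfigures the host and re-runs the auth cycle:
      #   hostrpc bdev_nvme_set_options --dhchap-digests "$digest" \
      #                                 --dhchap-dhgroups "$dhgroup"
      #   connect_authenticate "$digest" "$dhgroup" "$keyid"
      count=$((count + 1))
    done
  done
done
echo "$count combinations"
```

With the assumed arrays this yields 2 x 3 x 4 = 24 connect/verify/disconnect cycles, which matches the repeated attach/detach pattern visible throughout this section of the log.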
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:57.773   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:58.031   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.032   00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:58.291  
00:20:58.291    00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:58.291    00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:58.291    00:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:58.291   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:58.291    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:58.291    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:58.291    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:58.552    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:58.552    {
00:20:58.552      "cntlid": 57,
00:20:58.552      "qid": 0,
00:20:58.552      "state": "enabled",
00:20:58.552      "thread": "nvmf_tgt_poll_group_000",
00:20:58.552      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:58.552      "listen_address": {
00:20:58.552        "trtype": "TCP",
00:20:58.552        "adrfam": "IPv4",
00:20:58.552        "traddr": "10.0.0.2",
00:20:58.552        "trsvcid": "4420"
00:20:58.552      },
00:20:58.552      "peer_address": {
00:20:58.552        "trtype": "TCP",
00:20:58.552        "adrfam": "IPv4",
00:20:58.552        "traddr": "10.0.0.1",
00:20:58.552        "trsvcid": "39972"
00:20:58.552      },
00:20:58.552      "auth": {
00:20:58.552        "state": "completed",
00:20:58.552        "digest": "sha384",
00:20:58.552        "dhgroup": "ffdhe2048"
00:20:58.552      }
00:20:58.552    }
00:20:58.552  ]'
00:20:58.552    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:58.552    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:58.552    00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:58.552   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:58.814   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:58.814   00:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:59.378  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:59.378   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.635   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:59.893  
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:59.893   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:59.893   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:59.893    {
00:20:59.893      "cntlid": 59,
00:20:59.893      "qid": 0,
00:20:59.893      "state": "enabled",
00:20:59.893      "thread": "nvmf_tgt_poll_group_000",
00:20:59.893      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:59.893      "listen_address": {
00:20:59.893        "trtype": "TCP",
00:20:59.893        "adrfam": "IPv4",
00:20:59.893        "traddr": "10.0.0.2",
00:20:59.893        "trsvcid": "4420"
00:20:59.893      },
00:20:59.893      "peer_address": {
00:20:59.893        "trtype": "TCP",
00:20:59.893        "adrfam": "IPv4",
00:20:59.893        "traddr": "10.0.0.1",
00:20:59.893        "trsvcid": "40002"
00:20:59.893      },
00:20:59.893      "auth": {
00:20:59.893        "state": "completed",
00:20:59.893        "digest": "sha384",
00:20:59.893        "dhgroup": "ffdhe2048"
00:20:59.893      }
00:20:59.893    }
00:20:59.893  ]'
00:20:59.893    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:00.152   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:00.152    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:00.152   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:00.152    00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:00.152   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:00.152   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:00.152   00:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:00.411   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:00.411   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:00.978  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:00.978   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:00.979   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:00.979   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:00.979   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:00.979   00:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:01.236  
00:21:01.236    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:01.236    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:01.236    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:01.495   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:01.495   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:01.495    {
00:21:01.495      "cntlid": 61,
00:21:01.495      "qid": 0,
00:21:01.495      "state": "enabled",
00:21:01.495      "thread": "nvmf_tgt_poll_group_000",
00:21:01.495      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:01.495      "listen_address": {
00:21:01.495        "trtype": "TCP",
00:21:01.495        "adrfam": "IPv4",
00:21:01.495        "traddr": "10.0.0.2",
00:21:01.495        "trsvcid": "4420"
00:21:01.495      },
00:21:01.495      "peer_address": {
00:21:01.495        "trtype": "TCP",
00:21:01.495        "adrfam": "IPv4",
00:21:01.495        "traddr": "10.0.0.1",
00:21:01.495        "trsvcid": "48808"
00:21:01.495      },
00:21:01.495      "auth": {
00:21:01.495        "state": "completed",
00:21:01.495        "digest": "sha384",
00:21:01.495        "dhgroup": "ffdhe2048"
00:21:01.495      }
00:21:01.495    }
00:21:01.495  ]'
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:01.495   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:01.495    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:01.754   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:01.754    00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:01.754   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:01.754   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:01.754   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:02.012   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:02.012   00:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:02.580  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:02.580   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:02.839  
00:21:02.839    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:02.839    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:02.839    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:03.098    {
00:21:03.098      "cntlid": 63,
00:21:03.098      "qid": 0,
00:21:03.098      "state": "enabled",
00:21:03.098      "thread": "nvmf_tgt_poll_group_000",
00:21:03.098      "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:03.098      "listen_address": {
00:21:03.098        "trtype": "TCP",
00:21:03.098        "adrfam": "IPv4",
00:21:03.098        "traddr": "10.0.0.2",
00:21:03.098        "trsvcid": "4420"
00:21:03.098      },
00:21:03.098      "peer_address": {
00:21:03.098        "trtype": "TCP",
00:21:03.098        "adrfam": "IPv4",
00:21:03.098        "traddr": "10.0.0.1",
00:21:03.098        "trsvcid": "48828"
00:21:03.098      },
00:21:03.098      "auth": {
00:21:03.098        "state": "completed",
00:21:03.098        "digest": "sha384",
00:21:03.098        "dhgroup": "ffdhe2048"
00:21:03.098      }
00:21:03.098    }
00:21:03.098  ]'
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:03.098    00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:03.098   00:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:03.356   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:03.356   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:03.923  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:03.923   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:04.181   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.182   00:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.440  
00:21:04.440    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:04.440    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:04.440    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:04.697   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:04.697    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:04.697    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:04.697    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:04.697    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:04.698   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:04.698  {
00:21:04.698  "cntlid": 65,
00:21:04.698  "qid": 0,
00:21:04.698  "state": "enabled",
00:21:04.698  "thread": "nvmf_tgt_poll_group_000",
00:21:04.698  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:04.698  "listen_address": {
00:21:04.698  "trtype": "TCP",
00:21:04.698  "adrfam": "IPv4",
00:21:04.698  "traddr": "10.0.0.2",
00:21:04.698  "trsvcid": "4420"
00:21:04.698  },
00:21:04.698  "peer_address": {
00:21:04.698  "trtype": "TCP",
00:21:04.698  "adrfam": "IPv4",
00:21:04.698  "traddr": "10.0.0.1",
00:21:04.698  "trsvcid": "48852"
00:21:04.698  },
00:21:04.698  "auth": {
00:21:04.698  "state": "completed",
00:21:04.698  "digest": "sha384",
00:21:04.698  "dhgroup": "ffdhe3072"
00:21:04.698  }
00:21:04.698  }
00:21:04.698  ]'
00:21:04.698    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:04.698   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:04.698    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:04.698   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:04.698    00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:04.956   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:04.956   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:04.956   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:04.956   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:04.956   00:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:05.522   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:05.522  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:05.522   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:05.522   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.522   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.522   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
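The trace above is one full connect_authenticate cycle (sha384 / ffdhe3072 / key0), and the same pattern repeats below for key1–key3 and the other DH groups. A minimal dry-run sketch of that cycle follows; `rpc` and `nvme_cli` here are hypothetical stand-ins that only echo the commands (the real test drives `scripts/rpc.py -s /var/tmp/host.sock` against a live SPDK target, which is assumed, not reproduced):

```shell
#!/usr/bin/env bash
# Dry-run sketch of one connect_authenticate cycle from target/auth.sh.
# Values mirror the log above; nothing is actually executed against SPDK.

rpc() { echo "rpc.py $*"; }      # stand-in: print instead of calling the RPC socket
nvme_cli() { echo "nvme $*"; }   # stand-in for the nvme-cli binary

digest=sha384 dhgroup=ffdhe3072 key=key0
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

# 1. Restrict the host-side bdev layer to one digest/dhgroup combination.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host on the subsystem with a DH-HMAC-CHAP key (plus controller key
#    for bidirectional auth; key3 in the log below has no ckey, so that step is
#    unidirectional there).
rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# 3. Attach a controller; the attach succeeds only if the handshake completes.
rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# 4. Verify via the qpair listing that auth.state == "completed" with the
#    expected digest and dhgroup, then detach.
rpc nvmf_subsystem_get_qpairs "$subnqn"
rpc bdev_nvme_detach_controller nvme0

# 5. Repeat the handshake with the kernel initiator, then clean up.
nvme_cli connect -t tcp -a 10.0.0.2 -n "$subnqn" --dhchap-secret "DHHC-1:00:..."
nvme_cli disconnect -n "$subnqn"
rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```

The outer loops in the trace (auth.sh lines 119–123) simply re-run this cycle for each keyid and each configured DH group.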
00:21:05.523   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:05.523   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:05.523   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:05.781   00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.038  
00:21:06.038    00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:06.038    00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:06.038    00:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:06.297   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:06.297   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:06.297  {
00:21:06.297  "cntlid": 67,
00:21:06.297  "qid": 0,
00:21:06.297  "state": "enabled",
00:21:06.297  "thread": "nvmf_tgt_poll_group_000",
00:21:06.297  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:06.297  "listen_address": {
00:21:06.297  "trtype": "TCP",
00:21:06.297  "adrfam": "IPv4",
00:21:06.297  "traddr": "10.0.0.2",
00:21:06.297  "trsvcid": "4420"
00:21:06.297  },
00:21:06.297  "peer_address": {
00:21:06.297  "trtype": "TCP",
00:21:06.297  "adrfam": "IPv4",
00:21:06.297  "traddr": "10.0.0.1",
00:21:06.297  "trsvcid": "48890"
00:21:06.297  },
00:21:06.297  "auth": {
00:21:06.297  "state": "completed",
00:21:06.297  "digest": "sha384",
00:21:06.297  "dhgroup": "ffdhe3072"
00:21:06.297  }
00:21:06.297  }
00:21:06.297  ]'
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:06.297   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:06.297    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:06.297   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:06.298    00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:06.298   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:06.298   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:06.298   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:06.556   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:06.556   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:07.125  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:07.125   00:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.383   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:07.642  
00:21:07.642    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:07.642    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:07.642    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:07.900   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:07.900    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:07.900    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:07.900    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.900    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:07.900   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:07.900  {
00:21:07.900  "cntlid": 69,
00:21:07.900  "qid": 0,
00:21:07.900  "state": "enabled",
00:21:07.900  "thread": "nvmf_tgt_poll_group_000",
00:21:07.900  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:07.900  "listen_address": {
00:21:07.900  "trtype": "TCP",
00:21:07.900  "adrfam": "IPv4",
00:21:07.900  "traddr": "10.0.0.2",
00:21:07.900  "trsvcid": "4420"
00:21:07.900  },
00:21:07.900  "peer_address": {
00:21:07.900  "trtype": "TCP",
00:21:07.900  "adrfam": "IPv4",
00:21:07.900  "traddr": "10.0.0.1",
00:21:07.900  "trsvcid": "48926"
00:21:07.900  },
00:21:07.900  "auth": {
00:21:07.900  "state": "completed",
00:21:07.900  "digest": "sha384",
00:21:07.900  "dhgroup": "ffdhe3072"
00:21:07.900  }
00:21:07.900  }
00:21:07.900  ]'
00:21:07.901    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:07.901   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:07.901    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:07.901   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:07.901    00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:07.901   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:07.901   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:07.901   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:08.158   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:08.158   00:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:08.750  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:08.750   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:09.009   00:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:09.267  
00:21:09.267    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:09.267    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:09.267    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:09.529  {
00:21:09.529  "cntlid": 71,
00:21:09.529  "qid": 0,
00:21:09.529  "state": "enabled",
00:21:09.529  "thread": "nvmf_tgt_poll_group_000",
00:21:09.529  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:09.529  "listen_address": {
00:21:09.529  "trtype": "TCP",
00:21:09.529  "adrfam": "IPv4",
00:21:09.529  "traddr": "10.0.0.2",
00:21:09.529  "trsvcid": "4420"
00:21:09.529  },
00:21:09.529  "peer_address": {
00:21:09.529  "trtype": "TCP",
00:21:09.529  "adrfam": "IPv4",
00:21:09.529  "traddr": "10.0.0.1",
00:21:09.529  "trsvcid": "48950"
00:21:09.529  },
00:21:09.529  "auth": {
00:21:09.529  "state": "completed",
00:21:09.529  "digest": "sha384",
00:21:09.529  "dhgroup": "ffdhe3072"
00:21:09.529  }
00:21:09.529  }
00:21:09.529  ]'
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:09.529    00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:09.529   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:09.788   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:09.788   00:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:10.354   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:10.354  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:10.354   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:10.355   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:10.613   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:10.872  
00:21:10.872    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:10.872    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:10.872    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:11.131  {
00:21:11.131  "cntlid": 73,
00:21:11.131  "qid": 0,
00:21:11.131  "state": "enabled",
00:21:11.131  "thread": "nvmf_tgt_poll_group_000",
00:21:11.131  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:11.131  "listen_address": {
00:21:11.131  "trtype": "TCP",
00:21:11.131  "adrfam": "IPv4",
00:21:11.131  "traddr": "10.0.0.2",
00:21:11.131  "trsvcid": "4420"
00:21:11.131  },
00:21:11.131  "peer_address": {
00:21:11.131  "trtype": "TCP",
00:21:11.131  "adrfam": "IPv4",
00:21:11.131  "traddr": "10.0.0.1",
00:21:11.131  "trsvcid": "48972"
00:21:11.131  },
00:21:11.131  "auth": {
00:21:11.131  "state": "completed",
00:21:11.131  "digest": "sha384",
00:21:11.131  "dhgroup": "ffdhe4096"
00:21:11.131  }
00:21:11.131  }
00:21:11.131  ]'
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:11.131    00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:11.131   00:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.389   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:11.389   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:11.955  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:11.955   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.214   00:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:12.472  
00:21:12.472    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:12.473    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:12.473    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:12.731  {
00:21:12.731  "cntlid": 75,
00:21:12.731  "qid": 0,
00:21:12.731  "state": "enabled",
00:21:12.731  "thread": "nvmf_tgt_poll_group_000",
00:21:12.731  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:12.731  "listen_address": {
00:21:12.731  "trtype": "TCP",
00:21:12.731  "adrfam": "IPv4",
00:21:12.731  "traddr": "10.0.0.2",
00:21:12.731  "trsvcid": "4420"
00:21:12.731  },
00:21:12.731  "peer_address": {
00:21:12.731  "trtype": "TCP",
00:21:12.731  "adrfam": "IPv4",
00:21:12.731  "traddr": "10.0.0.1",
00:21:12.731  "trsvcid": "47830"
00:21:12.731  },
00:21:12.731  "auth": {
00:21:12.731  "state": "completed",
00:21:12.731  "digest": "sha384",
00:21:12.731  "dhgroup": "ffdhe4096"
00:21:12.731  }
00:21:12.731  }
00:21:12.731  ]'
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:12.731    00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:12.731   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:12.989   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:12.989   00:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:13.553  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:13.553   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:13.810   00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:14.069  
00:21:14.069    00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:14.069    00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:14.069    00:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:14.328  {
00:21:14.328  "cntlid": 77,
00:21:14.328  "qid": 0,
00:21:14.328  "state": "enabled",
00:21:14.328  "thread": "nvmf_tgt_poll_group_000",
00:21:14.328  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:14.328  "listen_address": {
00:21:14.328  "trtype": "TCP",
00:21:14.328  "adrfam": "IPv4",
00:21:14.328  "traddr": "10.0.0.2",
00:21:14.328  "trsvcid": "4420"
00:21:14.328  },
00:21:14.328  "peer_address": {
00:21:14.328  "trtype": "TCP",
00:21:14.328  "adrfam": "IPv4",
00:21:14.328  "traddr": "10.0.0.1",
00:21:14.328  "trsvcid": "47854"
00:21:14.328  },
00:21:14.328  "auth": {
00:21:14.328  "state": "completed",
00:21:14.328  "digest": "sha384",
00:21:14.328  "dhgroup": "ffdhe4096"
00:21:14.328  }
00:21:14.328  }
00:21:14.328  ]'
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:14.328    00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:14.328   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:14.586   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:14.587   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.156  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:15.156   00:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:15.415   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:15.673  
00:21:15.673    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:15.673    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:15.673    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:15.937  {
00:21:15.937  "cntlid": 79,
00:21:15.937  "qid": 0,
00:21:15.937  "state": "enabled",
00:21:15.937  "thread": "nvmf_tgt_poll_group_000",
00:21:15.937  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:15.937  "listen_address": {
00:21:15.937  "trtype": "TCP",
00:21:15.937  "adrfam": "IPv4",
00:21:15.937  "traddr": "10.0.0.2",
00:21:15.937  "trsvcid": "4420"
00:21:15.937  },
00:21:15.937  "peer_address": {
00:21:15.937  "trtype": "TCP",
00:21:15.937  "adrfam": "IPv4",
00:21:15.937  "traddr": "10.0.0.1",
00:21:15.937  "trsvcid": "47868"
00:21:15.937  },
00:21:15.937  "auth": {
00:21:15.937  "state": "completed",
00:21:15.937  "digest": "sha384",
00:21:15.937  "dhgroup": "ffdhe4096"
00:21:15.937  }
00:21:15.937  }
00:21:15.937  ]'
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:15.937    00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:15.937   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.197   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:16.197   00:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:16.765  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
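The trace above repeats one cycle per digest/dhgroup/key combination: configure DH-HMAC-CHAP options on the host, register the host NQN with a key on the subsystem, attach a controller, verify the qpair's `auth` state, then detach and remove the host. The following is a dry-run sketch of that sweep, with the subsystem/host NQNs and socket path taken from the log; the `rpc` helper here only echoes the `rpc.py` commands it would issue (it is a stand-in for the test's `hostrpc`/`rpc_cmd` wrappers, not part of SPDK), so the sketch runs without a live target.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the auth test cycle seen in the log. "rpc" is a
# hypothetical stand-in that prints each rpc.py invocation instead of
# executing it; in the real test these go to the host's RPC socket.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

run_auth_sweep() {
    local subnqn="nqn.2024-03.io.spdk:cnode0"
    local hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"
    local dhgroup keyid
    for dhgroup in ffdhe4096 ffdhe6144; do
        for keyid in 0 1 2 3; do
            # Restrict the host to one digest/dhgroup pair per iteration.
            rpc bdev_nvme_set_options --dhchap-digests sha384 \
                --dhchap-dhgroups "$dhgroup"
            # Register the host NQN with this key on the subsystem.
            rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
                --dhchap-key "key$keyid"
            # Attach a controller; authentication happens here.
            rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
                -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key "key$keyid"
            # Real test: nvmf_subsystem_get_qpairs + jq checks that
            # .auth.digest/.auth.dhgroup match and .auth.state == "completed".
            rpc bdev_nvme_detach_controller nvme0
            rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
        done
    done
}
```

Each iteration prints five commands; the real test additionally round-trips an `nvme connect`/`nvme disconnect` with the matching `DHHC-1` secrets, which is omitted here.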
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:16.765   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.024   00:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:17.283  
00:21:17.283    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:17.283    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:17.283    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.542  {
00:21:17.542  "cntlid": 81,
00:21:17.542  "qid": 0,
00:21:17.542  "state": "enabled",
00:21:17.542  "thread": "nvmf_tgt_poll_group_000",
00:21:17.542  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:17.542  "listen_address": {
00:21:17.542  "trtype": "TCP",
00:21:17.542  "adrfam": "IPv4",
00:21:17.542  "traddr": "10.0.0.2",
00:21:17.542  "trsvcid": "4420"
00:21:17.542  },
00:21:17.542  "peer_address": {
00:21:17.542  "trtype": "TCP",
00:21:17.542  "adrfam": "IPv4",
00:21:17.542  "traddr": "10.0.0.1",
00:21:17.542  "trsvcid": "47904"
00:21:17.542  },
00:21:17.542  "auth": {
00:21:17.542  "state": "completed",
00:21:17.542  "digest": "sha384",
00:21:17.542  "dhgroup": "ffdhe6144"
00:21:17.542  }
00:21:17.542  }
00:21:17.542  ]'
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:17.542    00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:17.542   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:17.801   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:17.801   00:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.367  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:18.367   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:18.626   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:18.885  
00:21:18.885    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:18.885    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:18.885    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.144   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.144   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.144  {
00:21:19.144  "cntlid": 83,
00:21:19.144  "qid": 0,
00:21:19.144  "state": "enabled",
00:21:19.144  "thread": "nvmf_tgt_poll_group_000",
00:21:19.144  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:19.144  "listen_address": {
00:21:19.144  "trtype": "TCP",
00:21:19.144  "adrfam": "IPv4",
00:21:19.144  "traddr": "10.0.0.2",
00:21:19.144  "trsvcid": "4420"
00:21:19.144  },
00:21:19.144  "peer_address": {
00:21:19.144  "trtype": "TCP",
00:21:19.144  "adrfam": "IPv4",
00:21:19.144  "traddr": "10.0.0.1",
00:21:19.144  "trsvcid": "47920"
00:21:19.144  },
00:21:19.144  "auth": {
00:21:19.144  "state": "completed",
00:21:19.144  "digest": "sha384",
00:21:19.144  "dhgroup": "ffdhe6144"
00:21:19.144  }
00:21:19.144  }
00:21:19.144  ]'
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:19.144   00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:19.144    00:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:19.402    00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:19.402   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:19.968   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:19.968  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:19.968   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:19.968   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.968   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.230   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.230   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:20.230   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:20.230   00:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.230   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.800  
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:20.800   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:20.800   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:20.800  {
00:21:20.800  "cntlid": 85,
00:21:20.800  "qid": 0,
00:21:20.800  "state": "enabled",
00:21:20.800  "thread": "nvmf_tgt_poll_group_000",
00:21:20.800  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:20.800  "listen_address": {
00:21:20.800  "trtype": "TCP",
00:21:20.800  "adrfam": "IPv4",
00:21:20.800  "traddr": "10.0.0.2",
00:21:20.800  "trsvcid": "4420"
00:21:20.800  },
00:21:20.800  "peer_address": {
00:21:20.800  "trtype": "TCP",
00:21:20.800  "adrfam": "IPv4",
00:21:20.800  "traddr": "10.0.0.1",
00:21:20.800  "trsvcid": "47942"
00:21:20.800  },
00:21:20.800  "auth": {
00:21:20.800  "state": "completed",
00:21:20.800  "digest": "sha384",
00:21:20.800  "dhgroup": "ffdhe6144"
00:21:20.800  }
00:21:20.800  }
00:21:20.800  ]'
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:20.800   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:20.800    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:21.059   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:21.059    00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:21.059   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:21.059   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:21.059   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.318   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:21.318   00:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:21.886  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:21.886   00:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:22.454  
00:21:22.454    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:22.454    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:22.454    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:22.455   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:22.455   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:22.455  {
00:21:22.455  "cntlid": 87,
00:21:22.455  "qid": 0,
00:21:22.455  "state": "enabled",
00:21:22.455  "thread": "nvmf_tgt_poll_group_000",
00:21:22.455  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:22.455  "listen_address": {
00:21:22.455  "trtype": "TCP",
00:21:22.455  "adrfam": "IPv4",
00:21:22.455  "traddr": "10.0.0.2",
00:21:22.455  "trsvcid": "4420"
00:21:22.455  },
00:21:22.455  "peer_address": {
00:21:22.455  "trtype": "TCP",
00:21:22.455  "adrfam": "IPv4",
00:21:22.455  "traddr": "10.0.0.1",
00:21:22.455  "trsvcid": "60822"
00:21:22.455  },
00:21:22.455  "auth": {
00:21:22.455  "state": "completed",
00:21:22.455  "digest": "sha384",
00:21:22.455  "dhgroup": "ffdhe6144"
00:21:22.455  }
00:21:22.455  }
00:21:22.455  ]'
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:22.455   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:22.455    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:22.713   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:22.713    00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:22.713   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:22.713   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:22.713   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:22.972   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:22.972   00:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:23.539  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:23.539   00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:24.107  
00:21:24.107    00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:24.107    00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:24.107    00:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:24.365   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:24.365    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:24.365    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:24.365    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:24.365    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:24.365   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:24.365  {
00:21:24.365  "cntlid": 89,
00:21:24.365  "qid": 0,
00:21:24.366  "state": "enabled",
00:21:24.366  "thread": "nvmf_tgt_poll_group_000",
00:21:24.366  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:24.366  "listen_address": {
00:21:24.366  "trtype": "TCP",
00:21:24.366  "adrfam": "IPv4",
00:21:24.366  "traddr": "10.0.0.2",
00:21:24.366  "trsvcid": "4420"
00:21:24.366  },
00:21:24.366  "peer_address": {
00:21:24.366  "trtype": "TCP",
00:21:24.366  "adrfam": "IPv4",
00:21:24.366  "traddr": "10.0.0.1",
00:21:24.366  "trsvcid": "60842"
00:21:24.366  },
00:21:24.366  "auth": {
00:21:24.366  "state": "completed",
00:21:24.366  "digest": "sha384",
00:21:24.366  "dhgroup": "ffdhe8192"
00:21:24.366  }
00:21:24.366  }
00:21:24.366  ]'
00:21:24.366    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:24.366   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:24.366    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:24.366   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:24.366    00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:24.366   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:24.366   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:24.366   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:24.624   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:24.624   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:25.191  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:25.191   00:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:25.449   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.450   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:25.450   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:25.450   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:25.450   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:26.019  
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.019   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:26.019   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:26.019  {
00:21:26.019  "cntlid": 91,
00:21:26.019  "qid": 0,
00:21:26.019  "state": "enabled",
00:21:26.019  "thread": "nvmf_tgt_poll_group_000",
00:21:26.019  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:26.019  "listen_address": {
00:21:26.019  "trtype": "TCP",
00:21:26.019  "adrfam": "IPv4",
00:21:26.019  "traddr": "10.0.0.2",
00:21:26.019  "trsvcid": "4420"
00:21:26.019  },
00:21:26.019  "peer_address": {
00:21:26.019  "trtype": "TCP",
00:21:26.019  "adrfam": "IPv4",
00:21:26.019  "traddr": "10.0.0.1",
00:21:26.019  "trsvcid": "60856"
00:21:26.019  },
00:21:26.019  "auth": {
00:21:26.019  "state": "completed",
00:21:26.019  "digest": "sha384",
00:21:26.019  "dhgroup": "ffdhe8192"
00:21:26.019  }
00:21:26.019  }
00:21:26.019  ]'
00:21:26.019    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:26.277   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:26.277    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:26.277   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:26.277    00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:26.277   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:26.277   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:26.277   00:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:26.535   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:26.535   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:27.103  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:27.103   00:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:27.362   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:27.929  
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:27.929   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:27.929   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:27.929  {
00:21:27.929  "cntlid": 93,
00:21:27.929  "qid": 0,
00:21:27.929  "state": "enabled",
00:21:27.929  "thread": "nvmf_tgt_poll_group_000",
00:21:27.929  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:27.929  "listen_address": {
00:21:27.929  "trtype": "TCP",
00:21:27.929  "adrfam": "IPv4",
00:21:27.929  "traddr": "10.0.0.2",
00:21:27.929  "trsvcid": "4420"
00:21:27.929  },
00:21:27.929  "peer_address": {
00:21:27.929  "trtype": "TCP",
00:21:27.929  "adrfam": "IPv4",
00:21:27.929  "traddr": "10.0.0.1",
00:21:27.929  "trsvcid": "60880"
00:21:27.929  },
00:21:27.929  "auth": {
00:21:27.929  "state": "completed",
00:21:27.929  "digest": "sha384",
00:21:27.929  "dhgroup": "ffdhe8192"
00:21:27.929  }
00:21:27.929  }
00:21:27.929  ]'
00:21:27.929    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:28.188   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:28.188    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:28.188   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:28.188    00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:28.188   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:28.188   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:28.188   00:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:28.445   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:28.445   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:29.012  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:29.012   00:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:29.580  
00:21:29.580    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:29.580    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:29.580    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:29.839   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:29.839    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:29.839    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:29.839    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:29.839    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:29.839   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:29.839  {
00:21:29.839  "cntlid": 95,
00:21:29.839  "qid": 0,
00:21:29.839  "state": "enabled",
00:21:29.839  "thread": "nvmf_tgt_poll_group_000",
00:21:29.839  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:29.839  "listen_address": {
00:21:29.839  "trtype": "TCP",
00:21:29.839  "adrfam": "IPv4",
00:21:29.839  "traddr": "10.0.0.2",
00:21:29.839  "trsvcid": "4420"
00:21:29.839  },
00:21:29.839  "peer_address": {
00:21:29.839  "trtype": "TCP",
00:21:29.839  "adrfam": "IPv4",
00:21:29.839  "traddr": "10.0.0.1",
00:21:29.839  "trsvcid": "60904"
00:21:29.839  },
00:21:29.839  "auth": {
00:21:29.839  "state": "completed",
00:21:29.840  "digest": "sha384",
00:21:29.840  "dhgroup": "ffdhe8192"
00:21:29.840  }
00:21:29.840  }
00:21:29.840  ]'
00:21:29.840    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:29.840   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:29.840    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:29.840   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:29.840    00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:29.840   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:29.840   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:29.840   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:30.099   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:30.099   00:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:30.663  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:30.663   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:30.922   00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:31.180  
00:21:31.180    00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:31.180    00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:31.180    00:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:31.438   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:31.438    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:31.438    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:31.439    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:31.439    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:31.439  {
00:21:31.439  "cntlid": 97,
00:21:31.439  "qid": 0,
00:21:31.439  "state": "enabled",
00:21:31.439  "thread": "nvmf_tgt_poll_group_000",
00:21:31.439  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:31.439  "listen_address": {
00:21:31.439  "trtype": "TCP",
00:21:31.439  "adrfam": "IPv4",
00:21:31.439  "traddr": "10.0.0.2",
00:21:31.439  "trsvcid": "4420"
00:21:31.439  },
00:21:31.439  "peer_address": {
00:21:31.439  "trtype": "TCP",
00:21:31.439  "adrfam": "IPv4",
00:21:31.439  "traddr": "10.0.0.1",
00:21:31.439  "trsvcid": "40694"
00:21:31.439  },
00:21:31.439  "auth": {
00:21:31.439  "state": "completed",
00:21:31.439  "digest": "sha512",
00:21:31.439  "dhgroup": "null"
00:21:31.439  }
00:21:31.439  }
00:21:31.439  ]'
00:21:31.439    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:31.439    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:31.439    00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:31.439   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:31.697   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:31.698   00:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:32.265  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:32.265   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.524   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:32.784  
00:21:32.784    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:32.784    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:32.784    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:33.043   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:33.043    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:33.043    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.043    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.043    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.043   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:33.043  {
00:21:33.043  "cntlid": 99,
00:21:33.044  "qid": 0,
00:21:33.044  "state": "enabled",
00:21:33.044  "thread": "nvmf_tgt_poll_group_000",
00:21:33.044  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:33.044  "listen_address": {
00:21:33.044  "trtype": "TCP",
00:21:33.044  "adrfam": "IPv4",
00:21:33.044  "traddr": "10.0.0.2",
00:21:33.044  "trsvcid": "4420"
00:21:33.044  },
00:21:33.044  "peer_address": {
00:21:33.044  "trtype": "TCP",
00:21:33.044  "adrfam": "IPv4",
00:21:33.044  "traddr": "10.0.0.1",
00:21:33.044  "trsvcid": "40724"
00:21:33.044  },
00:21:33.044  "auth": {
00:21:33.044  "state": "completed",
00:21:33.044  "digest": "sha512",
00:21:33.044  "dhgroup": "null"
00:21:33.044  }
00:21:33.044  }
00:21:33.044  ]'
00:21:33.044    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:33.044   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:33.044    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:33.044   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:33.044    00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:33.044   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
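The three `jq` checks above (lines 75-77 of auth.sh) probe the qpairs JSON captured just before them. A minimal standalone sketch of that verification, using sample data copied from the log output (this is an illustration, not part of the test run itself):

```shell
#!/usr/bin/env bash
# Sketch of the auth.sh verification step: pull the negotiated digest,
# DH group, and auth state out of the nvmf_subsystem_get_qpairs JSON.
# The sample below mirrors the qpairs output recorded in the log.
qpairs='[{"cntlid": 99, "auth": {"state": "completed", "digest": "sha512", "dhgroup": "null"}}]'

digest=$(jq -r '.[0].auth.digest'  <<< "$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")
state=$(jq -r '.[0].auth.state'    <<< "$qpairs")

# auth.sh then compares each against the expected value, e.g.:
[[ $digest == sha512 ]] && [[ $state == completed ]] && echo "auth verified"
```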
00:21:33.044   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:33.044   00:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:33.303   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:33.303   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:33.901  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:33.901   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
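The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line above uses bash's `:+` alternate-value expansion so the `--dhchap-ctrlr-key` flag pair is emitted only when a controller key exists for that key index (note the key3 iteration later in this run attaches without one). A small runnable sketch of the idiom, with a hypothetical `ckeys` array for illustration:

```shell
#!/usr/bin/env bash
# Demonstrates the ${var:+...} expansion used by target/auth.sh to build an
# optional controller-key argument. ckeys here is hypothetical sample data:
# index 1 deliberately has no controller key.
ckeys=("secret0" "" "secret2")

build_ckey_args() {
    local keyid=$1
    # ${ckeys[keyid]:+...} expands to the flag pair only when the entry
    # is set and non-empty; otherwise it expands to nothing at all.
    echo ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
}

build_ckey_args 0   # → --dhchap-ctrlr-key ckey0
build_ckey_args 1   # → (empty: no ctrlr-key flag for this index)
```

This is why the later key3 attach in this log omits `--dhchap-ctrlr-key` entirely rather than passing an empty value.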
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.217   00:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:34.217  
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:34.529   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:34.529   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:34.529  {
00:21:34.529  "cntlid": 101,
00:21:34.529  "qid": 0,
00:21:34.529  "state": "enabled",
00:21:34.529  "thread": "nvmf_tgt_poll_group_000",
00:21:34.529  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:34.529  "listen_address": {
00:21:34.529  "trtype": "TCP",
00:21:34.529  "adrfam": "IPv4",
00:21:34.529  "traddr": "10.0.0.2",
00:21:34.529  "trsvcid": "4420"
00:21:34.529  },
00:21:34.529  "peer_address": {
00:21:34.529  "trtype": "TCP",
00:21:34.529  "adrfam": "IPv4",
00:21:34.529  "traddr": "10.0.0.1",
00:21:34.529  "trsvcid": "40748"
00:21:34.529  },
00:21:34.529  "auth": {
00:21:34.529  "state": "completed",
00:21:34.529  "digest": "sha512",
00:21:34.529  "dhgroup": "null"
00:21:34.529  }
00:21:34.529  }
00:21:34.529  ]'
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:34.529   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:34.529    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:34.787    00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:34.787   00:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:35.354   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:35.355  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:35.614   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:35.872  
00:21:35.872    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:35.872    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:35.872    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:36.131   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:36.131    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:36.131    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.131    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.131    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.131   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:36.131  {
00:21:36.131  "cntlid": 103,
00:21:36.131  "qid": 0,
00:21:36.131  "state": "enabled",
00:21:36.131  "thread": "nvmf_tgt_poll_group_000",
00:21:36.131  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:36.131  "listen_address": {
00:21:36.132  "trtype": "TCP",
00:21:36.132  "adrfam": "IPv4",
00:21:36.132  "traddr": "10.0.0.2",
00:21:36.132  "trsvcid": "4420"
00:21:36.132  },
00:21:36.132  "peer_address": {
00:21:36.132  "trtype": "TCP",
00:21:36.132  "adrfam": "IPv4",
00:21:36.132  "traddr": "10.0.0.1",
00:21:36.132  "trsvcid": "40772"
00:21:36.132  },
00:21:36.132  "auth": {
00:21:36.132  "state": "completed",
00:21:36.132  "digest": "sha512",
00:21:36.132  "dhgroup": "null"
00:21:36.132  }
00:21:36.132  }
00:21:36.132  ]'
00:21:36.132    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:36.132   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:36.132    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:36.132   00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:36.132    00:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:36.390   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:36.390   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:36.390   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:36.390   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:36.390   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:36.958  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:36.958   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.217   00:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:37.475  
00:21:37.475    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:37.475    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:37.475    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:37.734  {
00:21:37.734  "cntlid": 105,
00:21:37.734  "qid": 0,
00:21:37.734  "state": "enabled",
00:21:37.734  "thread": "nvmf_tgt_poll_group_000",
00:21:37.734  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:37.734  "listen_address": {
00:21:37.734  "trtype": "TCP",
00:21:37.734  "adrfam": "IPv4",
00:21:37.734  "traddr": "10.0.0.2",
00:21:37.734  "trsvcid": "4420"
00:21:37.734  },
00:21:37.734  "peer_address": {
00:21:37.734  "trtype": "TCP",
00:21:37.734  "adrfam": "IPv4",
00:21:37.734  "traddr": "10.0.0.1",
00:21:37.734  "trsvcid": "40792"
00:21:37.734  },
00:21:37.734  "auth": {
00:21:37.734  "state": "completed",
00:21:37.734  "digest": "sha512",
00:21:37.734  "dhgroup": "ffdhe2048"
00:21:37.734  }
00:21:37.734  }
00:21:37.734  ]'
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:37.734    00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:37.734   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:37.993   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:37.993   00:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:38.562  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:38.562   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.820   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:38.821   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:39.079  
00:21:39.079    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:39.079    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:39.079    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:39.337   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:39.337    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:39.337    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:39.337    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:39.337    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:39.337   00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:39.337  {
00:21:39.337  "cntlid": 107,
00:21:39.337  "qid": 0,
00:21:39.337  "state": "enabled",
00:21:39.337  "thread": "nvmf_tgt_poll_group_000",
00:21:39.337  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:39.337  "listen_address": {
00:21:39.337  "trtype": "TCP",
00:21:39.337  "adrfam": "IPv4",
00:21:39.337  "traddr": "10.0.0.2",
00:21:39.337  "trsvcid": "4420"
00:21:39.337  },
00:21:39.337  "peer_address": {
00:21:39.337  "trtype": "TCP",
00:21:39.337  "adrfam": "IPv4",
00:21:39.337  "traddr": "10.0.0.1",
00:21:39.337  "trsvcid": "40812"
00:21:39.337  },
00:21:39.337  "auth": {
00:21:39.337  "state": "completed",
00:21:39.337  "digest": "sha512",
00:21:39.337  "dhgroup": "ffdhe2048"
00:21:39.337  }
00:21:39.337  }
00:21:39.337  ]'
00:21:39.337    00:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:39.337   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:39.337    00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:39.337   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:39.337    00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:39.337   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:39.337   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:39.337   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:39.596   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:39.596   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:40.163  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:40.163   00:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.422   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:40.681  
00:21:40.681    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:40.681    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:40.681    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:40.682   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:40.682    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:40.682    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:40.682    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:40.682    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:40.682   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:40.682  {
00:21:40.682  "cntlid": 109,
00:21:40.682  "qid": 0,
00:21:40.682  "state": "enabled",
00:21:40.682  "thread": "nvmf_tgt_poll_group_000",
00:21:40.682  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:40.682  "listen_address": {
00:21:40.682  "trtype": "TCP",
00:21:40.682  "adrfam": "IPv4",
00:21:40.682  "traddr": "10.0.0.2",
00:21:40.682  "trsvcid": "4420"
00:21:40.682  },
00:21:40.682  "peer_address": {
00:21:40.682  "trtype": "TCP",
00:21:40.682  "adrfam": "IPv4",
00:21:40.682  "traddr": "10.0.0.1",
00:21:40.682  "trsvcid": "40848"
00:21:40.682  },
00:21:40.682  "auth": {
00:21:40.682  "state": "completed",
00:21:40.682  "digest": "sha512",
00:21:40.682  "dhgroup": "ffdhe2048"
00:21:40.682  }
00:21:40.682  }
00:21:40.682  ]'
00:21:40.941    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:40.941   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:40.941    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:40.941   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:40.941    00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:40.941   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:40.941   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:40.941   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:41.199   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:41.199   00:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:41.767  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:41.767   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:41.768   00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:42.026  
00:21:42.026    00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:42.026    00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:42.026    00:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:42.285   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:42.285   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:42.285  {
00:21:42.285  "cntlid": 111,
00:21:42.285  "qid": 0,
00:21:42.285  "state": "enabled",
00:21:42.285  "thread": "nvmf_tgt_poll_group_000",
00:21:42.285  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:42.285  "listen_address": {
00:21:42.285  "trtype": "TCP",
00:21:42.285  "adrfam": "IPv4",
00:21:42.285  "traddr": "10.0.0.2",
00:21:42.285  "trsvcid": "4420"
00:21:42.285  },
00:21:42.285  "peer_address": {
00:21:42.285  "trtype": "TCP",
00:21:42.285  "adrfam": "IPv4",
00:21:42.285  "traddr": "10.0.0.1",
00:21:42.285  "trsvcid": "37456"
00:21:42.285  },
00:21:42.285  "auth": {
00:21:42.285  "state": "completed",
00:21:42.285  "digest": "sha512",
00:21:42.285  "dhgroup": "ffdhe2048"
00:21:42.285  }
00:21:42.285  }
00:21:42.285  ]'
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:42.285   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:42.285    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:21:42.544    00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:42.544   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:43.111  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:43.111   00:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.370   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:43.629  
00:21:43.629    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:43.629    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:43.629    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:43.887   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:43.887   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:43.887  {
00:21:43.887  "cntlid": 113,
00:21:43.887  "qid": 0,
00:21:43.887  "state": "enabled",
00:21:43.887  "thread": "nvmf_tgt_poll_group_000",
00:21:43.887  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:43.887  "listen_address": {
00:21:43.887  "trtype": "TCP",
00:21:43.887  "adrfam": "IPv4",
00:21:43.887  "traddr": "10.0.0.2",
00:21:43.887  "trsvcid": "4420"
00:21:43.887  },
00:21:43.887  "peer_address": {
00:21:43.887  "trtype": "TCP",
00:21:43.887  "adrfam": "IPv4",
00:21:43.887  "traddr": "10.0.0.1",
00:21:43.887  "trsvcid": "37486"
00:21:43.887  },
00:21:43.887  "auth": {
00:21:43.887  "state": "completed",
00:21:43.887  "digest": "sha512",
00:21:43.887  "dhgroup": "ffdhe3072"
00:21:43.887  }
00:21:43.887  }
00:21:43.887  ]'
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:43.887   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:43.887   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:43.887    00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:44.145   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:44.145   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:44.145   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:44.146   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:44.146   00:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:44.711   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:44.711  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:44.711   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:44.711   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.711   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:44.969   00:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:45.228  
00:21:45.228    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:45.228    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:45.228    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:45.487   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:45.487   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:45.487  {
00:21:45.487  "cntlid": 115,
00:21:45.487  "qid": 0,
00:21:45.487  "state": "enabled",
00:21:45.487  "thread": "nvmf_tgt_poll_group_000",
00:21:45.487  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:45.487  "listen_address": {
00:21:45.487  "trtype": "TCP",
00:21:45.487  "adrfam": "IPv4",
00:21:45.487  "traddr": "10.0.0.2",
00:21:45.487  "trsvcid": "4420"
00:21:45.487  },
00:21:45.487  "peer_address": {
00:21:45.487  "trtype": "TCP",
00:21:45.487  "adrfam": "IPv4",
00:21:45.487  "traddr": "10.0.0.1",
00:21:45.487  "trsvcid": "37530"
00:21:45.487  },
00:21:45.487  "auth": {
00:21:45.487  "state": "completed",
00:21:45.487  "digest": "sha512",
00:21:45.487  "dhgroup": "ffdhe3072"
00:21:45.487  }
00:21:45.487  }
00:21:45.487  ]'
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:45.487   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:45.487   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:45.487    00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:45.745   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:45.745   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:45.745   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:45.745   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:45.745   00:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:46.311  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:46.311   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:46.570   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:46.828  
00:21:46.828    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:46.828    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:46.828    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:47.086   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:47.086    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:47.086    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.086    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.086    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:47.086   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:47.086  {
00:21:47.086  "cntlid": 117,
00:21:47.086  "qid": 0,
00:21:47.086  "state": "enabled",
00:21:47.086  "thread": "nvmf_tgt_poll_group_000",
00:21:47.086  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:47.086  "listen_address": {
00:21:47.086  "trtype": "TCP",
00:21:47.086  "adrfam": "IPv4",
00:21:47.086  "traddr": "10.0.0.2",
00:21:47.086  "trsvcid": "4420"
00:21:47.086  },
00:21:47.086  "peer_address": {
00:21:47.086  "trtype": "TCP",
00:21:47.086  "adrfam": "IPv4",
00:21:47.086  "traddr": "10.0.0.1",
00:21:47.086  "trsvcid": "37576"
00:21:47.086  },
00:21:47.086  "auth": {
00:21:47.086  "state": "completed",
00:21:47.086  "digest": "sha512",
00:21:47.086  "dhgroup": "ffdhe3072"
00:21:47.086  }
00:21:47.086  }
00:21:47.086  ]'
00:21:47.087    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:47.087   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:47.087    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:47.087   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:47.087    00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:47.345   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:47.345   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:47.345   00:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:47.346   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:47.346   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:47.910  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
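The `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` lines traced above use bash's `${parameter:+word}` expansion to pass the controller-key flags only when a bidirectional key exists for the current key index (note that the key3 iteration below adds the host with `--dhchap-key key3` alone). A minimal standalone sketch of that idiom, with a hypothetical `build_args` helper and placeholder key material that are not part of the real `target/auth.sh`:

```shell
#!/usr/bin/env bash
# Demo of the ${ckeys[$idx]:+...} idiom: the --dhchap-ctrlr-key arguments are
# emitted only when a controller key is defined for this key index.
ckeys=([2]="DHHC-1:01:placeholder")   # index 2 has a ctrlr key; index 3 does not

build_args() {          # hypothetical helper, for illustration only
  local idx=$1
  # Expands to two words (--dhchap-ctrlr-key ckeyN) or to nothing at all:
  local ckey=(${ckeys[$idx]:+--dhchap-ctrlr-key "ckey$idx"})
  echo "--dhchap-key key$idx" "${ckey[@]}"
}

build_args 2   # includes the ctrlr-key flags
build_args 3   # omits them entirely
```

Because an empty array expands to zero words under `"${ckey[@]}"`, the RPC invocation gets no stray empty argument when no controller key is configured.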
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:47.910   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:48.169   00:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:48.427  
00:21:48.427    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:48.427    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:48.427    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:48.686   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:48.686   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:48.686  {
00:21:48.686  "cntlid": 119,
00:21:48.686  "qid": 0,
00:21:48.686  "state": "enabled",
00:21:48.686  "thread": "nvmf_tgt_poll_group_000",
00:21:48.686  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:48.686  "listen_address": {
00:21:48.686  "trtype": "TCP",
00:21:48.686  "adrfam": "IPv4",
00:21:48.686  "traddr": "10.0.0.2",
00:21:48.686  "trsvcid": "4420"
00:21:48.686  },
00:21:48.686  "peer_address": {
00:21:48.686  "trtype": "TCP",
00:21:48.686  "adrfam": "IPv4",
00:21:48.686  "traddr": "10.0.0.1",
00:21:48.686  "trsvcid": "37596"
00:21:48.686  },
00:21:48.686  "auth": {
00:21:48.686  "state": "completed",
00:21:48.686  "digest": "sha512",
00:21:48.686  "dhgroup": "ffdhe3072"
00:21:48.686  }
00:21:48.686  }
00:21:48.686  ]'
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:48.686   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:48.686   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:21:48.686    00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:48.945   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:48.945   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:48.945   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:48.945   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:48.945   00:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:49.509  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
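The oddly escaped comparisons in this log, such as `[[ nvme0 == \n\v\m\e\0 ]]`, are an artifact of bash xtrace: the right-hand side of `[[ == ]]` is a glob pattern, and `set -x` prints a quoted pattern with every character backslash-escaped to show it is matched literally. A self-contained sketch reproducing the behavior (variable names are illustrative, not from the test script):

```shell
#!/usr/bin/env bash
# With xtrace on, the quoted RHS below is traced as: [[ nvme0 == \n\v\m\e\0 ]]
name="nvme0"
set -x
if [[ $name == "nvme0" ]]; then result=match; fi
set +x
echo "$result"
```

The trace goes to stderr, so the log lines interleave with captured stdout such as the controller name printed by `bdev_nvme_get_controllers | jq -r '.[].name'`.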
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:49.509   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:49.767   00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:50.026  
00:21:50.026    00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:50.026    00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:50.026    00:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:50.285   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:50.285   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:50.285  {
00:21:50.285  "cntlid": 121,
00:21:50.285  "qid": 0,
00:21:50.285  "state": "enabled",
00:21:50.285  "thread": "nvmf_tgt_poll_group_000",
00:21:50.285  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:50.285  "listen_address": {
00:21:50.285  "trtype": "TCP",
00:21:50.285  "adrfam": "IPv4",
00:21:50.285  "traddr": "10.0.0.2",
00:21:50.285  "trsvcid": "4420"
00:21:50.285  },
00:21:50.285  "peer_address": {
00:21:50.285  "trtype": "TCP",
00:21:50.285  "adrfam": "IPv4",
00:21:50.285  "traddr": "10.0.0.1",
00:21:50.285  "trsvcid": "37626"
00:21:50.285  },
00:21:50.285  "auth": {
00:21:50.285  "state": "completed",
00:21:50.285  "digest": "sha512",
00:21:50.285  "dhgroup": "ffdhe4096"
00:21:50.285  }
00:21:50.285  }
00:21:50.285  ]'
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:50.285   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:50.285    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:50.544   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:50.544    00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:50.544   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:50.544   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:50.544   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:50.803   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:50.803   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:51.376  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:51.376   00:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.376   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:51.377   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:51.377   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:51.636  
00:21:51.636    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:51.636    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:51.636    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:51.894   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:51.894    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:51.895    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:51.895    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:51.895    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:51.895   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:51.895  {
00:21:51.895  "cntlid": 123,
00:21:51.895  "qid": 0,
00:21:51.895  "state": "enabled",
00:21:51.895  "thread": "nvmf_tgt_poll_group_000",
00:21:51.895  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:51.895  "listen_address": {
00:21:51.895  "trtype": "TCP",
00:21:51.895  "adrfam": "IPv4",
00:21:51.895  "traddr": "10.0.0.2",
00:21:51.895  "trsvcid": "4420"
00:21:51.895  },
00:21:51.895  "peer_address": {
00:21:51.895  "trtype": "TCP",
00:21:51.895  "adrfam": "IPv4",
00:21:51.895  "traddr": "10.0.0.1",
00:21:51.895  "trsvcid": "50970"
00:21:51.895  },
00:21:51.895  "auth": {
00:21:51.895  "state": "completed",
00:21:51.895  "digest": "sha512",
00:21:51.895  "dhgroup": "ffdhe4096"
00:21:51.895  }
00:21:51.895  }
00:21:51.895  ]'
00:21:51.895    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:51.895   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:51.895    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:52.152   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:52.152    00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:52.152   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:52.152   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:52.153   00:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:52.411   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:52.411   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:52.976  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
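Each `connect_authenticate` iteration above ends by pulling `.auth.digest`, `.auth.dhgroup`, and `.auth.state` out of the `nvmf_subsystem_get_qpairs` JSON and comparing them against the values configured for that run. The real script uses `jq`; the sketch below is a standalone approximation using `sed` on a trimmed-down sample document (the `get_field` helper and the inline JSON are assumptions for this demo, not part of `target/auth.sh`):

```shell
#!/usr/bin/env bash
# Standalone sketch of the auth-state verification (auth.sh@75-77).
# Sample of the relevant slice of nvmf_subsystem_get_qpairs output:
qpairs='{"auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072"}}'

# Extract a quoted string field by name (crude stand-in for jq -r):
get_field() { sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p" <<<"$qpairs"; }

digest=$(get_field digest)
dhgroup=$(get_field dhgroup)
state=$(get_field state)
echo "$digest $dhgroup $state"
```

Only when all three fields match the iteration's digest, dhgroup, and the expected `completed` state does the test proceed to detach the controller and try the kernel-initiator path via `nvme connect`.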
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:52.976   00:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:53.234  
00:21:53.234    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:53.234    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:53.234    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:53.492   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:53.492   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:53.492  {
00:21:53.492  "cntlid": 125,
00:21:53.492  "qid": 0,
00:21:53.492  "state": "enabled",
00:21:53.492  "thread": "nvmf_tgt_poll_group_000",
00:21:53.492  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:53.492  "listen_address": {
00:21:53.492  "trtype": "TCP",
00:21:53.492  "adrfam": "IPv4",
00:21:53.492  "traddr": "10.0.0.2",
00:21:53.492  "trsvcid": "4420"
00:21:53.492  },
00:21:53.492  "peer_address": {
00:21:53.492  "trtype": "TCP",
00:21:53.492  "adrfam": "IPv4",
00:21:53.492  "traddr": "10.0.0.1",
00:21:53.492  "trsvcid": "50994"
00:21:53.492  },
00:21:53.492  "auth": {
00:21:53.492  "state": "completed",
00:21:53.492  "digest": "sha512",
00:21:53.492  "dhgroup": "ffdhe4096"
00:21:53.492  }
00:21:53.492  }
00:21:53.492  ]'
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:53.492   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:53.492    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:53.750   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:53.750    00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:53.750   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:53.750   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:53.750   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:54.009   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:54.009   00:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:54.577  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
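Each cycle above verifies the negotiated auth parameters by piping the `nvmf_subsystem_get_qpairs` output through `jq` (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state` at target/auth.sh@75-77). A minimal Python sketch of the same check, fed with the qpairs payload captured in this cycle; the helper name `check_auth` is illustrative and not part of the test suite:

```python
import json

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks in target/auth.sh: the first qpair must report
    the expected digest and DH group, and a 'completed' auth state."""
    qpairs = json.loads(qpairs_json)
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

# qpairs output from the sha512/ffdhe4096 cycle above, trimmed to the
# fields the check actually reads
qpairs = json.dumps([{
    "cntlid": 125,
    "qid": 0,
    "state": "enabled",
    "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe4096"},
}])
print(check_auth(qpairs, "sha512", "ffdhe4096"))  # True
```

The shell version compares the jq output against a glob-escaped literal (`[[ sha512 == \s\h\a\5\1\2 ]]`); the sketch collapses the three separate comparisons into one predicate.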
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:54.577   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:54.836  
00:21:54.836    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:55.095   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:55.095   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:55.095  {
00:21:55.095  "cntlid": 127,
00:21:55.095  "qid": 0,
00:21:55.095  "state": "enabled",
00:21:55.095  "thread": "nvmf_tgt_poll_group_000",
00:21:55.095  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:55.095  "listen_address": {
00:21:55.095  "trtype": "TCP",
00:21:55.095  "adrfam": "IPv4",
00:21:55.095  "traddr": "10.0.0.2",
00:21:55.095  "trsvcid": "4420"
00:21:55.095  },
00:21:55.095  "peer_address": {
00:21:55.095  "trtype": "TCP",
00:21:55.095  "adrfam": "IPv4",
00:21:55.095  "traddr": "10.0.0.1",
00:21:55.095  "trsvcid": "51030"
00:21:55.095  },
00:21:55.095  "auth": {
00:21:55.095  "state": "completed",
00:21:55.095  "digest": "sha512",
00:21:55.095  "dhgroup": "ffdhe4096"
00:21:55.095  }
00:21:55.095  }
00:21:55.095  ]'
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:55.095   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:55.095    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:55.355   00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:21:55.355    00:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:55.355   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:55.355   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:55.355   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:55.616   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:55.616   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:56.188  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
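The `--dhchap-secret` strings passed to `nvme connect` above use the NVMe DH-HMAC-CHAP secret representation `DHHC-1:<t>:<base64>:`, where `<t>` identifies the key size (`01` = 32-byte, `02` = 48-byte, `03` = 64-byte) and the base64 payload carries the raw key followed by a 4-byte CRC-32. A hedged sketch that splits a secret and checks the payload length; the key lengths are confirmed by the secrets in this log, but the CRC layout is my reading of the format and is not verified here:

```python
import base64

# hash/transform id -> expected key length in bytes ("00" = unconstrained)
KEY_LEN = {"00": None, "01": 32, "02": 48, "03": 64}

def parse_dhchap_secret(secret: str):
    """Split a 'DHHC-1:<t>:<base64>:' secret into (hash_id, key, crc_bytes).

    Assumed layout: base64 payload = raw key || 4-byte CRC-32 (the CRC is
    split off but intentionally not validated in this sketch).
    """
    prefix, hash_id, b64, _trailer = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    payload = base64.b64decode(b64)
    key, crc = payload[:-4], payload[-4:]
    expected = KEY_LEN[hash_id]
    if expected is not None and len(key) != expected:
        raise ValueError(f"key length {len(key)} != expected {expected}")
    return hash_id, key, crc

# the type-03 (64-byte key) secret used in the cycle above
secret = ("DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTVi"
          "MmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:")
hash_id, key, crc = parse_dhchap_secret(secret)
print(hash_id, len(key), len(crc))  # 03 64 4
```

This matches the sizes visible in the log: the `DHHC-1:01:` ctrl secrets decode to 32+4 bytes and the `DHHC-1:02:` secrets to 48+4 bytes.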
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:56.188   00:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:56.188   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:56.756  
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:56.756   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:56.756   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:56.756  {
00:21:56.756  "cntlid": 129,
00:21:56.756  "qid": 0,
00:21:56.756  "state": "enabled",
00:21:56.756  "thread": "nvmf_tgt_poll_group_000",
00:21:56.756  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:56.756  "listen_address": {
00:21:56.756  "trtype": "TCP",
00:21:56.756  "adrfam": "IPv4",
00:21:56.756  "traddr": "10.0.0.2",
00:21:56.756  "trsvcid": "4420"
00:21:56.756  },
00:21:56.756  "peer_address": {
00:21:56.756  "trtype": "TCP",
00:21:56.756  "adrfam": "IPv4",
00:21:56.756  "traddr": "10.0.0.1",
00:21:56.756  "trsvcid": "51060"
00:21:56.756  },
00:21:56.756  "auth": {
00:21:56.756  "state": "completed",
00:21:56.756  "digest": "sha512",
00:21:56.756  "dhgroup": "ffdhe6144"
00:21:56.756  }
00:21:56.756  }
00:21:56.756  ]'
00:21:56.756    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:57.015   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:57.015    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:57.015   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:57.015    00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:57.015   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:57.015   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:57.015   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:57.273   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:57.273   00:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:57.840  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:57.840   00:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:58.407  
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:58.407   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:58.407   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:58.407  {
00:21:58.407  "cntlid": 131,
00:21:58.407  "qid": 0,
00:21:58.407  "state": "enabled",
00:21:58.407  "thread": "nvmf_tgt_poll_group_000",
00:21:58.407  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:58.407  "listen_address": {
00:21:58.407  "trtype": "TCP",
00:21:58.407  "adrfam": "IPv4",
00:21:58.407  "traddr": "10.0.0.2",
00:21:58.407  "trsvcid": "4420"
00:21:58.407  },
00:21:58.407  "peer_address": {
00:21:58.407  "trtype": "TCP",
00:21:58.407  "adrfam": "IPv4",
00:21:58.407  "traddr": "10.0.0.1",
00:21:58.407  "trsvcid": "51090"
00:21:58.407  },
00:21:58.407  "auth": {
00:21:58.407  "state": "completed",
00:21:58.407  "digest": "sha512",
00:21:58.407  "dhgroup": "ffdhe6144"
00:21:58.407  }
00:21:58.407  }
00:21:58.407  ]'
00:21:58.407    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:58.666   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:58.666    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:58.666   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:58.666    00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:58.666   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:58.666   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:58.666   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:58.924   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:58.924   00:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:59.493  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:59.493   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:00.061  
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:00.062   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:00.062   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:00.062  {
00:22:00.062  "cntlid": 133,
00:22:00.062  "qid": 0,
00:22:00.062  "state": "enabled",
00:22:00.062  "thread": "nvmf_tgt_poll_group_000",
00:22:00.062  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:00.062  "listen_address": {
00:22:00.062  "trtype": "TCP",
00:22:00.062  "adrfam": "IPv4",
00:22:00.062  "traddr": "10.0.0.2",
00:22:00.062  "trsvcid": "4420"
00:22:00.062  },
00:22:00.062  "peer_address": {
00:22:00.062  "trtype": "TCP",
00:22:00.062  "adrfam": "IPv4",
00:22:00.062  "traddr": "10.0.0.1",
00:22:00.062  "trsvcid": "51112"
00:22:00.062  },
00:22:00.062  "auth": {
00:22:00.062  "state": "completed",
00:22:00.062  "digest": "sha512",
00:22:00.062  "dhgroup": "ffdhe6144"
00:22:00.062  }
00:22:00.062  }
00:22:00.062  ]'
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:00.062   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:00.062    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:00.325   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:00.325    00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:00.325   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:00.326   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:00.326   00:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:00.589   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:22:00.590   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:01.167  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:01.167   00:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:01.736  
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:01.736   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:01.736   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:01.736  {
00:22:01.736  "cntlid": 135,
00:22:01.736  "qid": 0,
00:22:01.736  "state": "enabled",
00:22:01.736  "thread": "nvmf_tgt_poll_group_000",
00:22:01.736  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:01.736  "listen_address": {
00:22:01.736  "trtype": "TCP",
00:22:01.736  "adrfam": "IPv4",
00:22:01.736  "traddr": "10.0.0.2",
00:22:01.736  "trsvcid": "4420"
00:22:01.736  },
00:22:01.736  "peer_address": {
00:22:01.736  "trtype": "TCP",
00:22:01.736  "adrfam": "IPv4",
00:22:01.736  "traddr": "10.0.0.1",
00:22:01.736  "trsvcid": "56376"
00:22:01.736  },
00:22:01.736  "auth": {
00:22:01.736  "state": "completed",
00:22:01.736  "digest": "sha512",
00:22:01.736  "dhgroup": "ffdhe6144"
00:22:01.736  }
00:22:01.736  }
00:22:01.736  ]'
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:01.736   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:01.736    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:01.995   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:22:01.995    00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:01.995   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:01.995   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:01.995   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:02.255   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:02.255   00:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:02.823  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:02.823   00:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:03.391  
00:22:03.391    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:03.391    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:03.391    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:03.650  {
00:22:03.650  "cntlid": 137,
00:22:03.650  "qid": 0,
00:22:03.650  "state": "enabled",
00:22:03.650  "thread": "nvmf_tgt_poll_group_000",
00:22:03.650  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:03.650  "listen_address": {
00:22:03.650  "trtype": "TCP",
00:22:03.650  "adrfam": "IPv4",
00:22:03.650  "traddr": "10.0.0.2",
00:22:03.650  "trsvcid": "4420"
00:22:03.650  },
00:22:03.650  "peer_address": {
00:22:03.650  "trtype": "TCP",
00:22:03.650  "adrfam": "IPv4",
00:22:03.650  "traddr": "10.0.0.1",
00:22:03.650  "trsvcid": "56390"
00:22:03.650  },
00:22:03.650  "auth": {
00:22:03.650  "state": "completed",
00:22:03.650  "digest": "sha512",
00:22:03.650  "dhgroup": "ffdhe8192"
00:22:03.650  }
00:22:03.650  }
00:22:03.650  ]'
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:03.650    00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:03.650   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:03.912   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:22:03.912   00:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:04.510  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:04.510   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:04.769   00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:05.337  
00:22:05.337    00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:05.337    00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:05.337    00:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:05.337   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:05.337   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:05.337  {
00:22:05.337  "cntlid": 139,
00:22:05.337  "qid": 0,
00:22:05.337  "state": "enabled",
00:22:05.337  "thread": "nvmf_tgt_poll_group_000",
00:22:05.337  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:05.337  "listen_address": {
00:22:05.337  "trtype": "TCP",
00:22:05.337  "adrfam": "IPv4",
00:22:05.337  "traddr": "10.0.0.2",
00:22:05.337  "trsvcid": "4420"
00:22:05.337  },
00:22:05.337  "peer_address": {
00:22:05.337  "trtype": "TCP",
00:22:05.337  "adrfam": "IPv4",
00:22:05.337  "traddr": "10.0.0.1",
00:22:05.337  "trsvcid": "56410"
00:22:05.337  },
00:22:05.337  "auth": {
00:22:05.337  "state": "completed",
00:22:05.337  "digest": "sha512",
00:22:05.337  "dhgroup": "ffdhe8192"
00:22:05.337  }
00:22:05.337  }
00:22:05.337  ]'
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:05.337   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:05.337    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:05.596   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:05.596    00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:05.596   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:05.596   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:05.596   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:05.856   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:22:05.856   00:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: --dhchap-ctrl-secret DHHC-1:02:YzVkYWMzZDhjNzdiNTQ3OTQ5YzhjNzEyMTM3Nzc4NzM4ODM2ZjFhZWNjZjA3ZjRkhy0zKg==:
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:06.423  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:06.423   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:06.424   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:22:06.991  
00:22:06.991    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:06.991    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:06.991    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:07.249   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:07.249    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:07.249    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:07.249    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:07.250    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:07.250   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:07.250  {
00:22:07.250  "cntlid": 141,
00:22:07.250  "qid": 0,
00:22:07.250  "state": "enabled",
00:22:07.250  "thread": "nvmf_tgt_poll_group_000",
00:22:07.250  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:07.250  "listen_address": {
00:22:07.250  "trtype": "TCP",
00:22:07.250  "adrfam": "IPv4",
00:22:07.250  "traddr": "10.0.0.2",
00:22:07.250  "trsvcid": "4420"
00:22:07.250  },
00:22:07.250  "peer_address": {
00:22:07.250  "trtype": "TCP",
00:22:07.250  "adrfam": "IPv4",
00:22:07.250  "traddr": "10.0.0.1",
00:22:07.250  "trsvcid": "56426"
00:22:07.250  },
00:22:07.250  "auth": {
00:22:07.250  "state": "completed",
00:22:07.250  "digest": "sha512",
00:22:07.250  "dhgroup": "ffdhe8192"
00:22:07.250  }
00:22:07.250  }
00:22:07.250  ]'
00:22:07.250    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:07.250   00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:07.250    00:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:07.250   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:07.250    00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:07.250   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:07.250   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:07.250   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:07.508   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:22:07.508   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:01:Y2I3NzY3ZDJiN2NlMTZhZjliYTg3MmUzYTFiNjFmNWP9hVu2:
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:08.075  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:08.075   00:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:08.336   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:08.904  
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:08.904   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:08.904    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:08.904   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:08.904  {
00:22:08.904    "cntlid": 143,
00:22:08.904    "qid": 0,
00:22:08.904    "state": "enabled",
00:22:08.904    "thread": "nvmf_tgt_poll_group_000",
00:22:08.904    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:08.904    "listen_address": {
00:22:08.904      "trtype": "TCP",
00:22:08.904      "adrfam": "IPv4",
00:22:08.904      "traddr": "10.0.0.2",
00:22:08.904      "trsvcid": "4420"
00:22:08.904    },
00:22:08.904    "peer_address": {
00:22:08.904      "trtype": "TCP",
00:22:08.904      "adrfam": "IPv4",
00:22:08.904      "traddr": "10.0.0.1",
00:22:08.904      "trsvcid": "56458"
00:22:08.904    },
00:22:08.904    "auth": {
00:22:08.904      "state": "completed",
00:22:08.905      "digest": "sha512",
00:22:08.905      "dhgroup": "ffdhe8192"
00:22:08.905    }
00:22:08.905  }
00:22:08.905  ]'
00:22:08.905    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:08.905   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:08.905    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:09.163   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:09.163    00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:09.163   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:09.163   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:09.163   00:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:09.420   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:09.420   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:09.987  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
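The cycle above (auth.sh@65-83) builds the optional controller-key argument with the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` idiom seen in the trace: the flag pair is emitted only when a ctrlr key is registered for that index, which is why the key3 attach above carries no `--dhchap-ctrlr-key`. A minimal standalone sketch of that expansion (the `ckeys` values here are placeholders, not the test's real key material):

```shell
#!/usr/bin/env bash
# Hedged sketch of the ckey idiom from target/auth.sh@68 in the trace.
# Index 0 has a ctrlr key; index 1 is deliberately empty.
ckeys=("c0secret" "")

build_ckey() {
  # ${ckeys[$1]:+...} expands to the two-word flag pair only when the
  # entry is set AND non-empty; otherwise the array stays empty.
  local -a ckey=(${ckeys[$1]:+--dhchap-ctrlr-key "ckey$1"})
  echo "${ckey[@]:-<none>}"
}

build_ckey 0   # emits: --dhchap-ctrlr-key ckey0
build_ckey 1   # emits: <none>
```

This is why the same `connect_authenticate` function serves both the unidirectional (key only) and bidirectional (key + ctrlr key) auth cases without branching.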
00:22:09.987    00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:22:09.987    00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:22:09.987    00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:22:09.987    00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:09.987   00:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:22:10.556  
00:22:10.556    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:10.556    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:10.556    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:10.814   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:10.814    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:10.814    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:10.815    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:10.815    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:10.815  {
00:22:10.815    "cntlid": 145,
00:22:10.815    "qid": 0,
00:22:10.815    "state": "enabled",
00:22:10.815    "thread": "nvmf_tgt_poll_group_000",
00:22:10.815    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:10.815    "listen_address": {
00:22:10.815      "trtype": "TCP",
00:22:10.815      "adrfam": "IPv4",
00:22:10.815      "traddr": "10.0.0.2",
00:22:10.815      "trsvcid": "4420"
00:22:10.815    },
00:22:10.815    "peer_address": {
00:22:10.815      "trtype": "TCP",
00:22:10.815      "adrfam": "IPv4",
00:22:10.815      "traddr": "10.0.0.1",
00:22:10.815      "trsvcid": "56494"
00:22:10.815    },
00:22:10.815    "auth": {
00:22:10.815      "state": "completed",
00:22:10.815      "digest": "sha512",
00:22:10.815      "dhgroup": "ffdhe8192"
00:22:10.815    }
00:22:10.815  }
00:22:10.815  ]'
00:22:10.815    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:10.815    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:10.815    00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:10.815   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:11.073   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:22:11.073   00:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YTY3MTc3YzlmNGUxYjlhNmM1ZDhmNWM2MjAwOWEwZDk1ZDA0ZTE2YThlZDI2MjYwCiVCEQ==: --dhchap-ctrl-secret DHHC-1:03:MjZhZTUzM2IyYTdjNTE1ZWE2NWY4YjAyZGI3MmVhNWM3NGNkNjk0ZDg4OTJkYmQ0ZGEzYzg3MTVkYWI3ODJmM4xINEk=:
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:11.640  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:11.640    00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:22:11.640   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:22:12.232  request:
00:22:12.232  {
00:22:12.232    "name": "nvme0",
00:22:12.232    "trtype": "tcp",
00:22:12.232    "traddr": "10.0.0.2",
00:22:12.232    "adrfam": "ipv4",
00:22:12.232    "trsvcid": "4420",
00:22:12.232    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:12.232    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:12.232    "prchk_reftag": false,
00:22:12.232    "prchk_guard": false,
00:22:12.232    "hdgst": false,
00:22:12.232    "ddgst": false,
00:22:12.232    "dhchap_key": "key2",
00:22:12.232    "allow_unrecognized_csi": false,
00:22:12.232    "method": "bdev_nvme_attach_controller",
00:22:12.232    "req_id": 1
00:22:12.232  }
00:22:12.232  Got JSON-RPC error response
00:22:12.232  response:
00:22:12.232  {
00:22:12.232    "code": -5,
00:22:12.232    "message": "Input/output error"
00:22:12.232  }
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.232   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.233    00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:22:12.233   00:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:22:12.597  request:
00:22:12.597  {
00:22:12.597    "name": "nvme0",
00:22:12.597    "trtype": "tcp",
00:22:12.597    "traddr": "10.0.0.2",
00:22:12.597    "adrfam": "ipv4",
00:22:12.597    "trsvcid": "4420",
00:22:12.597    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:12.597    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:12.597    "prchk_reftag": false,
00:22:12.597    "prchk_guard": false,
00:22:12.597    "hdgst": false,
00:22:12.597    "ddgst": false,
00:22:12.597    "dhchap_key": "key1",
00:22:12.597    "dhchap_ctrlr_key": "ckey2",
00:22:12.597    "allow_unrecognized_csi": false,
00:22:12.597    "method": "bdev_nvme_attach_controller",
00:22:12.597    "req_id": 1
00:22:12.597  }
00:22:12.597  Got JSON-RPC error response
00:22:12.597  response:
00:22:12.597  {
00:22:12.597    "code": -5,
00:22:12.597    "message": "Input/output error"
00:22:12.597  }
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.597    00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:12.597   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:22:13.193  request:
00:22:13.193  {
00:22:13.193    "name": "nvme0",
00:22:13.193    "trtype": "tcp",
00:22:13.193    "traddr": "10.0.0.2",
00:22:13.193    "adrfam": "ipv4",
00:22:13.193    "trsvcid": "4420",
00:22:13.193    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:13.193    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:13.193    "prchk_reftag": false,
00:22:13.193    "prchk_guard": false,
00:22:13.193    "hdgst": false,
00:22:13.193    "ddgst": false,
00:22:13.193    "dhchap_key": "key1",
00:22:13.193    "dhchap_ctrlr_key": "ckey1",
00:22:13.193    "allow_unrecognized_csi": false,
00:22:13.193    "method": "bdev_nvme_attach_controller",
00:22:13.193    "req_id": 1
00:22:13.193  }
00:22:13.193  Got JSON-RPC error response
00:22:13.193  response:
00:22:13.193  {
00:22:13.193    "code": -5,
00:22:13.193    "message": "Input/output error"
00:22:13.193  }
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3071531
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3071531 ']'
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3071531
00:22:13.193    00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:13.193    00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071531
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071531'
00:22:13.193  killing process with pid 3071531
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3071531
00:22:13.193   00:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3071531
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=3093245
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 3093245
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3093245 ']'
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:13.193   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 3093245
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3093245 ']'
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:13.452   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:13.453   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:13.453  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:13.453   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:13.453   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.710   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:13.710   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0
00:22:13.710   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:22:13.710   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.710   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.710  null0
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i8m
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.KlG ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KlG
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.hBX
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.z0d ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z0d
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.80f
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.mDM ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.mDM
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.nod
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:13.969   00:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:14.537  nvme0n1
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:14.815   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:14.815   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:22:14.815  {
00:22:14.815  "cntlid": 1,
00:22:14.815  "qid": 0,
00:22:14.815  "state": "enabled",
00:22:14.815  "thread": "nvmf_tgt_poll_group_000",
00:22:14.815  "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:14.815  "listen_address": {
00:22:14.815  "trtype": "TCP",
00:22:14.815  "adrfam": "IPv4",
00:22:14.815  "traddr": "10.0.0.2",
00:22:14.815  "trsvcid": "4420"
00:22:14.815  },
00:22:14.815  "peer_address": {
00:22:14.815  "trtype": "TCP",
00:22:14.815  "adrfam": "IPv4",
00:22:14.815  "traddr": "10.0.0.1",
00:22:14.815  "trsvcid": "59904"
00:22:14.815  },
00:22:14.815  "auth": {
00:22:14.815  "state": "completed",
00:22:14.815  "digest": "sha512",
00:22:14.815  "dhgroup": "ffdhe8192"
00:22:14.815  }
00:22:14.815  }
00:22:14.815  ]'
00:22:14.815    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:22:15.074   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:22:15.074    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:22:15.074   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:22:15.074    00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:22:15.074   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:22:15.074   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:15.074   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:15.332   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:15.332   00:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:15.899  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256
00:22:15.899   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.159    00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:16.159  request:
00:22:16.159  {
00:22:16.159    "name": "nvme0",
00:22:16.159    "trtype": "tcp",
00:22:16.159    "traddr": "10.0.0.2",
00:22:16.159    "adrfam": "ipv4",
00:22:16.159    "trsvcid": "4420",
00:22:16.159    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:16.159    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:16.159    "prchk_reftag": false,
00:22:16.159    "prchk_guard": false,
00:22:16.159    "hdgst": false,
00:22:16.159    "ddgst": false,
00:22:16.159    "dhchap_key": "key3",
00:22:16.159    "allow_unrecognized_csi": false,
00:22:16.159    "method": "bdev_nvme_attach_controller",
00:22:16.159    "req_id": 1
00:22:16.159  }
00:22:16.159  Got JSON-RPC error response
00:22:16.159  response:
00:22:16.159  {
00:22:16.159    "code": -5,
00:22:16.159    "message": "Input/output error"
00:22:16.159  }
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:16.159    00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:22:16.159    00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:16.159   00:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.418    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:16.418   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:16.677  request:
00:22:16.677  {
00:22:16.677    "name": "nvme0",
00:22:16.677    "trtype": "tcp",
00:22:16.677    "traddr": "10.0.0.2",
00:22:16.677    "adrfam": "ipv4",
00:22:16.677    "trsvcid": "4420",
00:22:16.677    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:16.677    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:16.677    "prchk_reftag": false,
00:22:16.677    "prchk_guard": false,
00:22:16.677    "hdgst": false,
00:22:16.677    "ddgst": false,
00:22:16.677    "dhchap_key": "key3",
00:22:16.677    "allow_unrecognized_csi": false,
00:22:16.677    "method": "bdev_nvme_attach_controller",
00:22:16.677    "req_id": 1
00:22:16.677  }
00:22:16.677  Got JSON-RPC error response
00:22:16.677  response:
00:22:16.677  {
00:22:16.677    "code": -5,
00:22:16.677    "message": "Input/output error"
00:22:16.677  }
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:16.677    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:16.677    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:22:16.677    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:16.677    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:16.677   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.936    00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:16.936   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:17.195  request:
00:22:17.195  {
00:22:17.195    "name": "nvme0",
00:22:17.195    "trtype": "tcp",
00:22:17.195    "traddr": "10.0.0.2",
00:22:17.195    "adrfam": "ipv4",
00:22:17.195    "trsvcid": "4420",
00:22:17.195    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:17.195    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:17.195    "prchk_reftag": false,
00:22:17.195    "prchk_guard": false,
00:22:17.195    "hdgst": false,
00:22:17.195    "ddgst": false,
00:22:17.195    "dhchap_key": "key0",
00:22:17.195    "dhchap_ctrlr_key": "key1",
00:22:17.195    "allow_unrecognized_csi": false,
00:22:17.195    "method": "bdev_nvme_attach_controller",
00:22:17.195    "req_id": 1
00:22:17.195  }
00:22:17.195  Got JSON-RPC error response
00:22:17.195  response:
00:22:17.195  {
00:22:17.195    "code": -5,
00:22:17.195    "message": "Input/output error"
00:22:17.195  }
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:17.196   00:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:17.454  nvme0n1
00:22:17.454    00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:22:17.454    00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:17.454    00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:22:17.713   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.713   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:17.713   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:17.971   00:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:18.539  nvme0n1
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:18.797   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:18.797   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:18.797   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:18.797   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:18.797   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:22:18.797    00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:19.056   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:19.056   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:19.056   00:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: --dhchap-ctrl-secret DHHC-1:03:ZTZlMDlhZTM0Y2E2NTRiZTAzOTZmMjAzOTBjZDA3OTA1MWY0NjZjYTViMmQ5ZDBkMDgxMGVkYTE1NWQ0NTllM+Otqg4=:
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:19.636    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:19.636   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:19.636   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:19.636   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:19.895    00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:19.895   00:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:20.154  request:
00:22:20.154  {
00:22:20.154    "name": "nvme0",
00:22:20.154    "trtype": "tcp",
00:22:20.154    "traddr": "10.0.0.2",
00:22:20.154    "adrfam": "ipv4",
00:22:20.154    "trsvcid": "4420",
00:22:20.154    "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:20.154    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:20.154    "prchk_reftag": false,
00:22:20.154    "prchk_guard": false,
00:22:20.154    "hdgst": false,
00:22:20.154    "ddgst": false,
00:22:20.155    "dhchap_key": "key1",
00:22:20.155    "allow_unrecognized_csi": false,
00:22:20.155    "method": "bdev_nvme_attach_controller",
00:22:20.155    "req_id": 1
00:22:20.155  }
00:22:20.155  Got JSON-RPC error response
00:22:20.155  response:
00:22:20.155  {
00:22:20.155    "code": -5,
00:22:20.155    "message": "Input/output error"
00:22:20.155  }
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:20.414   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:20.981  nvme0n1
00:22:20.981    00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:22:20.981    00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:22:20.981    00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:21.239   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:21.239   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:21.239   00:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:21.497   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:21.498   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:21.784  nvme0n1
00:22:21.784    00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:22:21.784    00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:22:21.784    00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:21.784   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:21.784   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:21.784   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: '' 2s
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos:
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos: ]]
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTBkMTM1Nzc2OTU1NjU0MGY3NDc1ZDNhMmJkZTI1NjlZkWos:
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:22.043   00:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: 2s
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==:
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==: ]]
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTg4NzRlZWJlMmRjZTRjM2Y0MzAxY2U2ZjAxOGNiNWNkNzg1MDg1NWVkMmYwZTMz7J972g==:
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:24.577   00:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:26.489  NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:26.489   00:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:27.062  nvme0n1
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:27.062   00:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:27.320    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:22:27.320    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:22:27.320    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:22:27.578   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:22:27.836    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:22:27.836    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:22:27.836    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:28.095    00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:28.095   00:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:28.669  request:
00:22:28.669  {
00:22:28.669    "name": "nvme0",
00:22:28.669    "dhchap_key": "key1",
00:22:28.669    "dhchap_ctrlr_key": "key3",
00:22:28.669    "method": "bdev_nvme_set_keys",
00:22:28.669    "req_id": 1
00:22:28.669  }
00:22:28.669  Got JSON-RPC error response
00:22:28.669  response:
00:22:28.669  {
00:22:28.669    "code": -13,
00:22:28.669    "message": "Permission denied"
00:22:28.669  }
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:28.669    00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:28.669    00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:28.669    00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:22:28.669   00:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:22:30.048    00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:30.048    00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:30.048    00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:30.048   00:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:30.615  nvme0n1
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:30.615    00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:30.615   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:22:31.204  request:
00:22:31.204  {
00:22:31.204    "name": "nvme0",
00:22:31.204    "dhchap_key": "key2",
00:22:31.204    "dhchap_ctrlr_key": "key0",
00:22:31.204    "method": "bdev_nvme_set_keys",
00:22:31.204    "req_id": 1
00:22:31.204  }
00:22:31.204  Got JSON-RPC error response
00:22:31.204  response:
00:22:31.204  {
00:22:31.204    "code": -13,
00:22:31.204    "message": "Permission denied"
00:22:31.204  }
00:22:31.204   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:31.204   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:31.204   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:31.204   00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:31.204    00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:31.204    00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:31.204    00:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:31.462   00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:22:31.462   00:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:22:32.398    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:22:32.398    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:22:32.398    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3071550
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3071550 ']'
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3071550
00:22:32.657    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:32.657    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3071550
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3071550'
00:22:32.657  killing process with pid 3071550
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3071550
00:22:32.657   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3071550
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:32.915  rmmod nvme_tcp
00:22:32.915  rmmod nvme_fabrics
00:22:32.915  rmmod nvme_keyring
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 3093245 ']'
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 3093245
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3093245 ']'
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3093245
00:22:32.915    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname
00:22:32.915   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:32.915    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093245
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093245'
00:22:33.174  killing process with pid 3093245
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3093245
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3093245
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:33.174   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:33.175   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:33.175   00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:33.175    00:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.i8m /tmp/spdk.key-sha256.hBX /tmp/spdk.key-sha384.80f /tmp/spdk.key-sha512.nod /tmp/spdk.key-sha512.KlG /tmp/spdk.key-sha384.z0d /tmp/spdk.key-sha256.mDM '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log
00:22:35.711  
00:22:35.711  real	2m31.752s
00:22:35.711  user	5m49.796s
00:22:35.711  sys	0m24.289s
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:35.711  ************************************
00:22:35.711  END TEST nvmf_auth_target
00:22:35.711  ************************************
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']'
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:35.711   00:03:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:35.711  ************************************
00:22:35.711  START TEST nvmf_bdevio_no_huge
00:22:35.711  ************************************
00:22:35.712   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages
00:22:35.712  * Looking for test storage...
00:22:35.712  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-:
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-:
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:35.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:35.712  		--rc genhtml_branch_coverage=1
00:22:35.712  		--rc genhtml_function_coverage=1
00:22:35.712  		--rc genhtml_legend=1
00:22:35.712  		--rc geninfo_all_blocks=1
00:22:35.712  		--rc geninfo_unexecuted_blocks=1
00:22:35.712  		
00:22:35.712  		'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:35.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:35.712  		--rc genhtml_branch_coverage=1
00:22:35.712  		--rc genhtml_function_coverage=1
00:22:35.712  		--rc genhtml_legend=1
00:22:35.712  		--rc geninfo_all_blocks=1
00:22:35.712  		--rc geninfo_unexecuted_blocks=1
00:22:35.712  		
00:22:35.712  		'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:35.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:35.712  		--rc genhtml_branch_coverage=1
00:22:35.712  		--rc genhtml_function_coverage=1
00:22:35.712  		--rc genhtml_legend=1
00:22:35.712  		--rc geninfo_all_blocks=1
00:22:35.712  		--rc geninfo_unexecuted_blocks=1
00:22:35.712  		
00:22:35.712  		'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:35.712  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:35.712  		--rc genhtml_branch_coverage=1
00:22:35.712  		--rc genhtml_function_coverage=1
00:22:35.712  		--rc genhtml_legend=1
00:22:35.712  		--rc geninfo_all_blocks=1
00:22:35.712  		--rc geninfo_unexecuted_blocks=1
00:22:35.712  		
00:22:35.712  		'
00:22:35.712   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:35.712    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:35.712     00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:35.712      00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.712      00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.713      00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.713      00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH
00:22:35.713      00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:35.713  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:35.713    00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable
00:22:35.713   00:03:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=()
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:42.284   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:22:42.285  Found 0000:af:00.0 (0x8086 - 0x159b)
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:22:42.285  Found 0000:af:00.1 (0x8086 - 0x159b)
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:42.285  Found net devices under 0000:af:00.0: cvl_0_0
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:42.285  Found net devices under 0000:af:00.1: cvl_0_1
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:42.285   00:03:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:42.285  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:42.285  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms
00:22:42.285  
00:22:42.285  --- 10.0.0.2 ping statistics ---
00:22:42.285  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:42.285  rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:42.285  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:42.285  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms
00:22:42.285  
00:22:42.285  --- 10.0.0.1 ping statistics ---
00:22:42.285  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:42.285  rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:22:42.285   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3100098
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3100098
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3100098 ']'
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:42.286  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:42.286   00:03:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.286  [2024-12-10 00:03:57.285996] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:22:42.286  [2024-12-10 00:03:57.286041] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:22:42.286  [2024-12-10 00:03:57.369217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:42.286  [2024-12-10 00:03:57.415897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:42.286  [2024-12-10 00:03:57.415931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:42.286  [2024-12-10 00:03:57.415937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:42.286  [2024-12-10 00:03:57.415943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:42.286  [2024-12-10 00:03:57.415948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:42.286  [2024-12-10 00:03:57.417049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:22:42.286  [2024-12-10 00:03:57.417157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:22:42.286  [2024-12-10 00:03:57.417265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:22:42.286  [2024-12-10 00:03:57.417265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:22:42.286   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:42.286   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0
00:22:42.286   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:42.286   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:42.286   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.544  [2024-12-10 00:03:58.161861] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.544  Malloc0
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.544   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:42.545  [2024-12-10 00:03:58.198090] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:42.545   00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=()
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:22:42.545  {
00:22:42.545    "params": {
00:22:42.545      "name": "Nvme$subsystem",
00:22:42.545      "trtype": "$TEST_TRANSPORT",
00:22:42.545      "traddr": "$NVMF_FIRST_TARGET_IP",
00:22:42.545      "adrfam": "ipv4",
00:22:42.545      "trsvcid": "$NVMF_PORT",
00:22:42.545      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:22:42.545      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:22:42.545      "hdgst": ${hdgst:-false},
00:22:42.545      "ddgst": ${ddgst:-false}
00:22:42.545    },
00:22:42.545    "method": "bdev_nvme_attach_controller"
00:22:42.545  }
00:22:42.545  EOF
00:22:42.545  )")
00:22:42.545     00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat
00:22:42.545    00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq .
00:22:42.545     00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=,
00:22:42.545     00:03:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:22:42.545    "params": {
00:22:42.545      "name": "Nvme1",
00:22:42.545      "trtype": "tcp",
00:22:42.545      "traddr": "10.0.0.2",
00:22:42.545      "adrfam": "ipv4",
00:22:42.545      "trsvcid": "4420",
00:22:42.545      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:22:42.545      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:22:42.545      "hdgst": false,
00:22:42.545      "ddgst": false
00:22:42.545    },
00:22:42.545    "method": "bdev_nvme_attach_controller"
00:22:42.545  }'
00:22:42.545  [2024-12-10 00:03:58.247791] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:22:42.545  [2024-12-10 00:03:58.247837] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3100219 ]
00:22:42.545  [2024-12-10 00:03:58.328301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:22:42.545  [2024-12-10 00:03:58.375796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:42.545  [2024-12-10 00:03:58.375902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:42.545  [2024-12-10 00:03:58.375902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:22:42.805  I/O targets:
00:22:42.805    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:22:42.805  
00:22:42.805  
00:22:42.805       CUnit - A unit testing framework for C - Version 2.1-3
00:22:42.805       http://cunit.sourceforge.net/
00:22:42.805  
00:22:42.805  
00:22:42.805  Suite: bdevio tests on: Nvme1n1
00:22:42.805    Test: blockdev write read block ...passed
00:22:43.064    Test: blockdev write zeroes read block ...passed
00:22:43.064    Test: blockdev write zeroes read no split ...passed
00:22:43.064    Test: blockdev write zeroes read split ...passed
00:22:43.064    Test: blockdev write zeroes read split partial ...passed
00:22:43.064    Test: blockdev reset ...[2024-12-10 00:03:58.709209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:43.064  [2024-12-10 00:03:58.709279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f2d30 (9): Bad file descriptor
00:22:43.064  [2024-12-10 00:03:58.723948] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:22:43.064  passed
00:22:43.064    Test: blockdev write read 8 blocks ...passed
00:22:43.064    Test: blockdev write read size > 128k ...passed
00:22:43.064    Test: blockdev write read invalid size ...passed
00:22:43.064    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:22:43.064    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:22:43.064    Test: blockdev write read max offset ...passed
00:22:43.064    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:22:43.064    Test: blockdev writev readv 8 blocks ...passed
00:22:43.322    Test: blockdev writev readv 30 x 1block ...passed
00:22:43.322    Test: blockdev writev readv block ...passed
00:22:43.322    Test: blockdev writev readv size > 128k ...passed
00:22:43.322    Test: blockdev writev readv size > 128k in two iovs ...passed
00:22:43.322    Test: blockdev comparev and writev ...[2024-12-10 00:03:58.973924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.973953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.973967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.973975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.974229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.974240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.974251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.974258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.974481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.974491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.974502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.974513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:22:43.322  [2024-12-10 00:03:58.974755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.322  [2024-12-10 00:03:58.974765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:22:43.323  [2024-12-10 00:03:58.974776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:22:43.323  [2024-12-10 00:03:58.974783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:22:43.323  passed
00:22:43.323    Test: blockdev nvme passthru rw ...passed
00:22:43.323    Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:03:59.056605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:43.323  [2024-12-10 00:03:59.056622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:22:43.323  [2024-12-10 00:03:59.056734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:43.323  [2024-12-10 00:03:59.056745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:22:43.323  [2024-12-10 00:03:59.056850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:43.323  [2024-12-10 00:03:59.056860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:22:43.323  [2024-12-10 00:03:59.056963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:43.323  [2024-12-10 00:03:59.056974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:43.323  passed
00:22:43.323    Test: blockdev nvme admin passthru ...passed
00:22:43.323    Test: blockdev copy ...passed
00:22:43.323  
00:22:43.323  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:22:43.323                suites      1      1    n/a      0        0
00:22:43.323                 tests     23     23     23      0        0
00:22:43.323               asserts    152    152    152      0      n/a
00:22:43.323  
00:22:43.323  Elapsed time =    1.064 seconds
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:43.581  rmmod nvme_tcp
00:22:43.581  rmmod nvme_fabrics
00:22:43.581  rmmod nvme_keyring
00:22:43.581   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3100098 ']'
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3100098
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3100098 ']'
00:22:43.582   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3100098
00:22:43.582    00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:43.841    00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3100098
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3100098'
00:22:43.841  killing process with pid 3100098
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3100098
00:22:43.841   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3100098
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:44.101   00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:44.101    00:03:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:46.004   00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:22:46.004  
00:22:46.004  real	0m10.755s
00:22:46.004  user	0m13.097s
00:22:46.004  sys	0m5.349s
00:22:46.004   00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:46.004   00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:22:46.004  ************************************
00:22:46.004  END TEST nvmf_bdevio_no_huge
00:22:46.004  ************************************
00:22:46.261   00:04:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:22:46.261   00:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:46.261   00:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:46.261   00:04:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:22:46.261  ************************************
00:22:46.261  START TEST nvmf_tls
00:22:46.261  ************************************
00:22:46.261   00:04:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:22:46.261  * Looking for test storage...
00:22:46.261  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-:
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-:
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 ))
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:22:46.261     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:22:46.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:46.261  		--rc genhtml_branch_coverage=1
00:22:46.261  		--rc genhtml_function_coverage=1
00:22:46.261  		--rc genhtml_legend=1
00:22:46.261  		--rc geninfo_all_blocks=1
00:22:46.261  		--rc geninfo_unexecuted_blocks=1
00:22:46.261  		
00:22:46.261  		'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:22:46.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:46.261  		--rc genhtml_branch_coverage=1
00:22:46.261  		--rc genhtml_function_coverage=1
00:22:46.261  		--rc genhtml_legend=1
00:22:46.261  		--rc geninfo_all_blocks=1
00:22:46.261  		--rc geninfo_unexecuted_blocks=1
00:22:46.261  		
00:22:46.261  		'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:22:46.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:46.261  		--rc genhtml_branch_coverage=1
00:22:46.261  		--rc genhtml_function_coverage=1
00:22:46.261  		--rc genhtml_legend=1
00:22:46.261  		--rc geninfo_all_blocks=1
00:22:46.261  		--rc geninfo_unexecuted_blocks=1
00:22:46.261  		
00:22:46.261  		'
00:22:46.261    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:22:46.261  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:22:46.261  		--rc genhtml_branch_coverage=1
00:22:46.261  		--rc genhtml_function_coverage=1
00:22:46.261  		--rc genhtml_legend=1
00:22:46.261  		--rc geninfo_all_blocks=1
00:22:46.262  		--rc geninfo_unexecuted_blocks=1
00:22:46.262  		
00:22:46.262  		'
00:22:46.262   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:22:46.262     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:46.262     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:22:46.262    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:46.519    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:46.519    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:22:46.520     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob
00:22:46.520     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:46.520     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:46.520     00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:46.520      00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:46.520      00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:46.520      00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:46.520      00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH
00:22:46.520      00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:22:46.520  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0
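The `[: : integer expression expected` message above (common.sh line 33) comes from the traced test `'[' '' -eq 1 ']'`: the variable behind it is unset, so `[` receives an empty string where `-eq` requires an integer. The script continues because the failed test merely returns false. A minimal reproduction and a defensive variant (`FLAG` is a placeholder name, not a variable from common.sh):

```shell
# Reproduce the failure: an empty operand where `[` expects an integer.
FLAG=""
if [ "$FLAG" -eq 1 ]; then echo "enabled"; fi   # stderr: integer expression expected

# Guard with a default so the comparison always sees an integer.
if [ "${FLAG:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled"; fi
```

The `${FLAG:-0}` expansion substitutes `0` when the variable is unset or empty, which is the usual fix for flag variables that are only sometimes exported.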
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:22:46.520    00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable
00:22:46.520   00:04:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.089   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:22:53.089   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=()
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:22:53.090  Found 0000:af:00.0 (0x8086 - 0x159b)
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:22:53.090  Found 0000:af:00.1 (0x8086 - 0x159b)
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:22:53.090  Found net devices under 0000:af:00.0: cvl_0_0
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:22:53.090  Found net devices under 0000:af:00.1: cvl_0_1
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:22:53.090   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:53.091   00:04:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:53.091  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:53.091  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms
00:22:53.091  
00:22:53.091  --- 10.0.0.2 ping statistics ---
00:22:53.091  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:53.091  rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:53.091  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:53.091  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:22:53.091  
00:22:53.091  --- 10.0.0.1 ping statistics ---
00:22:53.091  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:53.091  rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
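The `nvmf_tcp_init` steps traced above move one port of the dual-port NIC into a private network namespace, so initiator (`cvl_0_1`, 10.0.0.1) and target (`cvl_0_0`, 10.0.0.2) traffic crosses a real link, then verify both directions with ping. Condensed into a standalone sketch, with interface names and addresses taken from this log; it is wrapped in a dry-run helper because the real commands need root and these exact devices:

```shell
# Dry-run wrapper: with DRY_RUN set, commands are printed instead of executed,
# so the sequence can be reviewed on a machine without these NICs or root.
run() { if [ -n "${DRY_RUN:-}" ]; then echo "+ $*"; else "$@"; fi; }

TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
DRY_RUN=1

run ip netns add "$NS"                         # private namespace for the target side
run ip link set "$TARGET_IF" netns "$NS"       # move one NIC port into it
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                         # initiator -> target sanity check
run ip netns exec "$NS" ping -c 1 10.0.0.1     # and the reverse path
```

Running the target inside the namespace is why every later target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.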
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3103917
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3103917
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3103917 ']'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:53.091  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.091  [2024-12-10 00:04:08.106495] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:22:53.091  [2024-12-10 00:04:08.106540] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:53.091  [2024-12-10 00:04:08.186353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:53.091  [2024-12-10 00:04:08.225570] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:53.091  [2024-12-10 00:04:08.225607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:53.091  [2024-12-10 00:04:08.225614] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:53.091  [2024-12-10 00:04:08.225620] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:53.091  [2024-12-10 00:04:08.225625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:53.091  [2024-12-10 00:04:08.226103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']'
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl
00:22:53.091  true
00:22:53.091    00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version
00:22:53.091    00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]]
00:22:53.091   00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:22:53.091    00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:53.091    00:04:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version
00:22:53.350   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13
00:22:53.350   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]]
00:22:53.350   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7
00:22:53.351    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:53.351    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version
00:22:53.609   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7
00:22:53.609   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]]
00:22:53.609    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:53.609    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls
00:22:53.868   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false
00:22:53.868   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]]
00:22:53.868   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls
00:22:54.127    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:54.127    00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls
00:22:54.127   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true
00:22:54.127   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]]
00:22:54.127   00:04:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:22:54.385    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:22:54.385    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]]
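The `tls.sh` steps 74 through 114 above all follow one set-then-verify pattern: write an option with `sock_impl_set_options`, read it back with `sock_impl_get_options`, extract the field with `jq -r`, and compare against the expected value. The pattern in isolation, with a stub standing in for `rpc.py` (the JSON shape here is assumed from the fields the log queries):

```shell
# Stub for `rpc.py sock_impl_get_options -i ssl` so the pattern runs anywhere.
rpc_get() { echo '{"tls_version": 13, "enable_ktls": false}'; }

version=$(rpc_get | jq -r .tls_version)
if [ "$version" != "13" ]; then echo "tls_version mismatch: $version"; exit 1; fi

ktls=$(rpc_get | jq -r .enable_ktls)
if [ "$ktls" != "false" ]; then echo "enable_ktls mismatch: $ktls"; exit 1; fi

echo "options verified"
```

This round-trip check is why each `sock_impl_set_options` call in the trace is immediately followed by a `sock_impl_get_options | jq -r` pair.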
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
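`format_interchange_psk` above wraps a raw hex key in the NVMe TLS PSK interchange framing `NVMeTLSkey-1:<hmac>:<base64>:`, where the base64 payload is the key text followed by a 4-byte CRC32, computed by an inline `python -` step just as the trace shows. A hypothetical re-implementation for illustration (the little-endian CRC byte order is an assumption, so the exact base64 tail may differ from SPDK's output):

```shell
# Sketch of the PSK interchange framing; not SPDK's actual helper.
format_interchange_psk() {
    local key=$1 hmac=$2
    python3 - "$key" "$hmac" <<'EOF'
import base64, sys, zlib
key, hmac = sys.argv[1].encode(), int(sys.argv[2])
# Payload = key text + its CRC32 (byte order assumed little-endian).
crc = zlib.crc32(key).to_bytes(4, "little")
print(f"NVMeTLSkey-1:{hmac:02}:{base64.b64encode(key + crc).decode()}:")
EOF
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```

The `:01:` field is the HMAC/digest identifier passed as the second argument; a 32-character key plus the 4 CRC bytes makes a 36-byte payload, which matches the 48-character base64 strings seen in the log.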
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.C6e8ioS86X
00:22:54.644    00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:22:54.644   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.t8hUYPGy9g
00:22:54.645   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:22:54.645   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:22:54.645   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.C6e8ioS86X
00:22:54.645   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.t8hUYPGy9g
00:22:54.645   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:22:54.903   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:22:55.162   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.C6e8ioS86X
00:22:55.162   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.C6e8ioS86X
00:22:55.162   00:04:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:22:55.421  [2024-12-10 00:04:11.038975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:55.421   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:22:55.421   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:22:55.679  [2024-12-10 00:04:11.395875] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:22:55.679  [2024-12-10 00:04:11.396060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:55.679   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:22:55.937  malloc0
00:22:55.937   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:22:55.937   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.C6e8ioS86X
00:22:56.195   00:04:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
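The target-side TLS setup above (target/tls.sh lines 50-59, plus the preceding sock options and framework init) reduces to a short RPC sequence. A dry-run sketch with `rpc` stubbed to print each call instead of invoking scripts/rpc.py; the PSK file path is a placeholder, not the real temp file from this run:

```shell
# Stub: print each RPC instead of executing scripts/rpc.py (illustration only).
rpc() { echo "rpc.py $*"; }

KEY=/tmp/psk.txt   # placeholder for the generated PSK interchange file

# 1. Pin the ssl sock implementation to TLS 1.3, then finish framework init.
rpc sock_impl_set_options -i ssl --tls-version 13
rpc framework_start_init

# 2. TCP transport, subsystem, and a TLS-enabled listener (-k).
rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k

# 3. Back the subsystem with a malloc bdev and register the PSK for host1.
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc keyring_file_add_key key0 "$KEY"
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The key is registered in the target's keyring under the name key0, and only hosts added via nvmf_subsystem_add_host with that key can complete the handshake, which is what the negative cases later in this log exercise.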
00:22:56.454   00:04:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.C6e8ioS86X
00:23:06.444  Initializing NVMe Controllers
00:23:06.444  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:06.444  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:06.444  Initialization complete. Launching workers.
00:23:06.444  ========================================================
00:23:06.444                                                                                                               Latency(us)
00:23:06.444  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:23:06.444  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   16940.15      66.17    3778.09     819.91    5546.21
00:23:06.444  ========================================================
00:23:06.444  Total                                                                    :   16940.15      66.17    3778.09     819.91    5546.21
00:23:06.444  
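A quick consistency check on the table above: the MiB/s column is just IOPS times the 4096-byte I/O size, converted to MiB:

```shell
# 16940.15 IOPS * 4096 B per I/O, expressed in MiB/s (1 MiB = 2^20 B).
awk 'BEGIN { printf "%.2f\n", 16940.15 * 4096 / (1024 * 1024) }'
# -> 66.17
```

which reproduces the 66.17 MiB/s reported by spdk_nvme_perf.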
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.C6e8ioS86X
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C6e8ioS86X
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3106317
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3106317 /var/tmp/bdevperf.sock
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3106317 ']'
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:06.444   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:06.444  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:06.445   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:06.445   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:06.703  [2024-12-10 00:04:22.308500] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:06.703  [2024-12-10 00:04:22.308549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3106317 ]
00:23:06.703  [2024-12-10 00:04:22.382378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:06.703  [2024-12-10 00:04:22.423189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:06.703   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:06.703   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:06.703   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C6e8ioS86X
00:23:06.961   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:07.219  [2024-12-10 00:04:22.871871] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:07.219  TLSTESTn1
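The bdevperf initiator mirrors the target setup: the same PSK file is registered in the bdevperf process's own keyring over its private RPC socket, then the controller is attached with `--psk`. A dry-run sketch (`rpc` is a printing stub; socket and key paths are placeholders):

```shell
SOCK=/var/tmp/bdevperf.sock   # bdevperf's private RPC socket (-r)
rpc() { echo "rpc.py -s $SOCK $*"; }

# bdevperf itself would already be running, launched roughly as:
#   bdevperf -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w verify -t 10 &

# Register the same PSK file inside bdevperf, then attach over TLS.
rpc keyring_file_add_key key0 /tmp/psk.txt
rpc bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# The I/O phase is then driven externally:
#   bdevperf.py -t 20 -s "$SOCK" perform_tests
```

On success the attach returns the bdev name (TLSTESTn1 above); the perform_tests call that follows runs the configured verify workload against it.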
00:23:07.219   00:04:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:07.219  Running I/O for 10 seconds...
00:23:09.225       5480.00 IOPS,    21.41 MiB/s
[2024-12-09T23:04:26.459Z]      5504.00 IOPS,    21.50 MiB/s
[2024-12-09T23:04:27.394Z]      5533.00 IOPS,    21.61 MiB/s
[2024-12-09T23:04:28.329Z]      5566.00 IOPS,    21.74 MiB/s
[2024-12-09T23:04:29.263Z]      5579.40 IOPS,    21.79 MiB/s
[2024-12-09T23:04:30.198Z]      5587.17 IOPS,    21.82 MiB/s
[2024-12-09T23:04:31.134Z]      5570.43 IOPS,    21.76 MiB/s
[2024-12-09T23:04:32.082Z]      5566.25 IOPS,    21.74 MiB/s
[2024-12-09T23:04:33.467Z]      5576.89 IOPS,    21.78 MiB/s
[2024-12-09T23:04:33.467Z]      5557.40 IOPS,    21.71 MiB/s
00:23:17.610                                                                                                  Latency(us)
00:23:17.610  
[2024-12-09T23:04:33.467Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:17.610  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:17.610  	 Verification LBA range: start 0x0 length 0x2000
00:23:17.610  	 TLSTESTn1           :      10.01    5562.32      21.73       0.00     0.00   22977.66    5617.37   23218.47
00:23:17.610  
[2024-12-09T23:04:33.467Z]  ===================================================================================================================
00:23:17.610  
[2024-12-09T23:04:33.467Z]  Total                       :               5562.32      21.73       0.00     0.00   22977.66    5617.37   23218.47
00:23:17.610  {
00:23:17.610    "results": [
00:23:17.610      {
00:23:17.610        "job": "TLSTESTn1",
00:23:17.610        "core_mask": "0x4",
00:23:17.610        "workload": "verify",
00:23:17.610        "status": "finished",
00:23:17.610        "verify_range": {
00:23:17.610          "start": 0,
00:23:17.610          "length": 8192
00:23:17.610        },
00:23:17.610        "queue_depth": 128,
00:23:17.610        "io_size": 4096,
00:23:17.610        "runtime": 10.013992,
00:23:17.610        "iops": 5562.317205765693,
00:23:17.610        "mibps": 21.727801585022238,
00:23:17.610        "io_failed": 0,
00:23:17.610        "io_timeout": 0,
00:23:17.610        "avg_latency_us": 22977.662692009464,
00:23:17.610        "min_latency_us": 5617.371428571429,
00:23:17.610        "max_latency_us": 23218.46857142857
00:23:17.610      }
00:23:17.610    ],
00:23:17.610    "core_count": 1
00:23:17.610  }
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3106317
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3106317 ']'
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3106317
00:23:17.610    00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:17.610    00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3106317
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3106317'
00:23:17.610  killing process with pid 3106317
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3106317
00:23:17.610  Received shutdown signal, test time was about 10.000000 seconds
00:23:17.610  
00:23:17.610                                                                                                  Latency(us)
00:23:17.610  
[2024-12-09T23:04:33.467Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:17.610  
[2024-12-09T23:04:33.467Z]  ===================================================================================================================
00:23:17.610  
[2024-12-09T23:04:33.467Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3106317
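The killprocess/wait pair above guards the kill with a liveness probe and a process-name check. A simplified paraphrase of the autotest_common.sh logic (the real helper does more, including sudo-aware signalling and retries; this is a sketch, not the actual implementation):

```shell
# Simplified paraphrase of killprocess: probe the pid with kill -0,
# refuse to signal a bare sudo wrapper, then terminate and reap it.
killprocess() {
    pid=$1
    kill -0 "$pid" 2>/dev/null || return 0        # nothing to do if already gone
    if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
        return 1                                  # never SIGTERM the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap; ignore the signal exit status
}
```

The reactor_2 comparison in the log is exactly this guard: bdevperf's comm name is reactor_2, not sudo, so the kill proceeds.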
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8hUYPGy9g
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:17.610   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8hUYPGy9g
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:17.611    00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8hUYPGy9g
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.t8hUYPGy9g
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3108048
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3108048 /var/tmp/bdevperf.sock
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108048 ']'
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:17.611  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:17.611   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:17.611  [2024-12-10 00:04:33.374522] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:17.611  [2024-12-10 00:04:33.374568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108048 ]
00:23:17.611  [2024-12-10 00:04:33.448725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:17.868  [2024-12-10 00:04:33.487697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:17.868   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:17.868   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:17.868   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.t8hUYPGy9g
00:23:18.126   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:18.126  [2024-12-10 00:04:33.951794] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:18.126  [2024-12-10 00:04:33.963400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:18.126  [2024-12-10 00:04:33.964091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868410 (107): Transport endpoint is not connected
00:23:18.126  [2024-12-10 00:04:33.965084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1868410 (9): Bad file descriptor
00:23:18.126  [2024-12-10 00:04:33.966086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:23:18.126  [2024-12-10 00:04:33.966099] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:18.126  [2024-12-10 00:04:33.966106] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:23:18.126  [2024-12-10 00:04:33.966114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:23:18.126  request:
00:23:18.126  {
00:23:18.126    "name": "TLSTEST",
00:23:18.126    "trtype": "tcp",
00:23:18.126    "traddr": "10.0.0.2",
00:23:18.126    "adrfam": "ipv4",
00:23:18.126    "trsvcid": "4420",
00:23:18.126    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:18.126    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:18.126    "prchk_reftag": false,
00:23:18.126    "prchk_guard": false,
00:23:18.126    "hdgst": false,
00:23:18.126    "ddgst": false,
00:23:18.126    "psk": "key0",
00:23:18.126    "allow_unrecognized_csi": false,
00:23:18.126    "method": "bdev_nvme_attach_controller",
00:23:18.126    "req_id": 1
00:23:18.126  }
00:23:18.126  Got JSON-RPC error response
00:23:18.126  response:
00:23:18.126  {
00:23:18.126    "code": -5,
00:23:18.126    "message": "Input/output error"
00:23:18.126  }
00:23:18.385   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3108048
00:23:18.385   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108048 ']'
00:23:18.385   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108048
00:23:18.385    00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:18.385   00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:18.385    00:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108048
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108048'
00:23:18.385  killing process with pid 3108048
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108048
00:23:18.385  Received shutdown signal, test time was about 10.000000 seconds
00:23:18.385  
00:23:18.385                                                                                                  Latency(us)
00:23:18.385  
[2024-12-09T23:04:34.242Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:18.385  
[2024-12-09T23:04:34.242Z]  ===================================================================================================================
00:23:18.385  
[2024-12-09T23:04:34.242Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108048
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
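The NOT wrapper and the `es` bookkeeping above implement an expected-failure assertion: the wrapped command must exit nonzero for the test to pass. A simplified reimplementation (the real NOT in autotest_common.sh also special-cases signal exits via the `es > 128` check seen in the trace; the attach stand-in below is hypothetical):

```shell
# Simplified NOT: succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1      # command unexpectedly succeeded
    fi
    return 0          # command failed, which is what we wanted
}

# Stand-in for an attach that fails because the target rejects the PSK.
attach_with_unknown_key() { echo "Input/output error" >&2; return 1; }

NOT attach_with_unknown_key 2>/dev/null && echo "expected failure observed"
# -> expected failure observed
```

Here the attach fails because /tmp/tmp.t8hUYPGy9g holds a key the target never registered, so the JSON-RPC error above is the expected outcome and the surrounding NOT turns it into a pass.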
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C6e8ioS86X
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C6e8ioS86X
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:18.385    00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.C6e8ioS86X
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C6e8ioS86X
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3108225
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3108225 /var/tmp/bdevperf.sock
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108225 ']'
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:18.385  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:18.385   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:18.643  [2024-12-10 00:04:34.243762] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:18.643  [2024-12-10 00:04:34.243812] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108225 ]
00:23:18.643  [2024-12-10 00:04:34.311537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:18.643  [2024-12-10 00:04:34.348164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:18.643   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:18.643   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:18.643   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C6e8ioS86X
00:23:18.901   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0
00:23:19.160  [2024-12-10 00:04:34.823708] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:19.160  [2024-12-10 00:04:34.828170] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:23:19.160  [2024-12-10 00:04:34.828192] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
00:23:19.160  [2024-12-10 00:04:34.828215] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:19.160  [2024-12-10 00:04:34.828870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662410 (107): Transport endpoint is not connected
00:23:19.160  [2024-12-10 00:04:34.829862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662410 (9): Bad file descriptor
00:23:19.160  [2024-12-10 00:04:34.830863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state
00:23:19.160  [2024-12-10 00:04:34.830873] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:19.160  [2024-12-10 00:04:34.830881] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted
00:23:19.160  [2024-12-10 00:04:34.830892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state.
00:23:19.161  request:
00:23:19.161  {
00:23:19.161    "name": "TLSTEST",
00:23:19.161    "trtype": "tcp",
00:23:19.161    "traddr": "10.0.0.2",
00:23:19.161    "adrfam": "ipv4",
00:23:19.161    "trsvcid": "4420",
00:23:19.161    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:19.161    "hostnqn": "nqn.2016-06.io.spdk:host2",
00:23:19.161    "prchk_reftag": false,
00:23:19.161    "prchk_guard": false,
00:23:19.161    "hdgst": false,
00:23:19.161    "ddgst": false,
00:23:19.161    "psk": "key0",
00:23:19.161    "allow_unrecognized_csi": false,
00:23:19.161    "method": "bdev_nvme_attach_controller",
00:23:19.161    "req_id": 1
00:23:19.161  }
00:23:19.161  Got JSON-RPC error response
00:23:19.161  response:
00:23:19.161  {
00:23:19.161    "code": -5,
00:23:19.161    "message": "Input/output error"
00:23:19.161  }
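This second negative case fails earlier, during the TLS handshake itself: the target looks up the PSK by an identity string built from both NQNs, so nqn.2016-06.io.spdk:host2, which was never registered via nvmf_subsystem_add_host, cannot match anything in the keyring. The identity format is visible in the tcp_sock_get_key error above; a sketch that reproduces it (helper name hypothetical, and the NVMe0R01 prefix is taken verbatim from this log rather than derived):

```shell
# Build the TLS PSK identity string as it appears in the error message:
# "NVMe0R01 <hostnqn> <subnqn>".
psk_identity() { printf 'NVMe0R01 %s %s\n' "$1" "$2"; }

psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```

Because the identity is keyed on the host NQN, registering key0 for host1 only (as done in the setup phase) is enough to make this host2 attach fail with the same Input/output error as the bad-key case, just via a different code path.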
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3108225
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108225 ']'
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108225
00:23:19.161    00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:19.161    00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108225
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108225'
00:23:19.161  killing process with pid 3108225
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108225
00:23:19.161  Received shutdown signal, test time was about 10.000000 seconds
00:23:19.161  
00:23:19.161                                                                                                  Latency(us)
00:23:19.161  
[2024-12-09T23:04:35.018Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:19.161  
[2024-12-09T23:04:35.018Z]  ===================================================================================================================
00:23:19.161  
[2024-12-09T23:04:35.018Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:23:19.161   00:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108225
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C6e8ioS86X
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C6e8ioS86X
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:19.419    00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.C6e8ioS86X
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.C6e8ioS86X
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3108447
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3108447 /var/tmp/bdevperf.sock
00:23:19.419   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108447 ']'
00:23:19.420   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:19.420   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:19.420   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:19.420  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:19.420   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:19.420   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:19.420  [2024-12-10 00:04:35.113604] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:19.420  [2024-12-10 00:04:35.113655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108447 ]
00:23:19.420  [2024-12-10 00:04:35.187106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:19.420  [2024-12-10 00:04:35.223837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:19.678   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:19.678   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:19.678   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.C6e8ioS86X
00:23:19.678   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:19.936  [2024-12-10 00:04:35.680038] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:19.936  [2024-12-10 00:04:35.687697] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:19.936  [2024-12-10 00:04:35.687717] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2
00:23:19.936  [2024-12-10 00:04:35.687740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:23:19.936  [2024-12-10 00:04:35.688312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919410 (107): Transport endpoint is not connected
00:23:19.936  [2024-12-10 00:04:35.689305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1919410 (9): Bad file descriptor
00:23:19.936  [2024-12-10 00:04:35.690307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state
00:23:19.936  [2024-12-10 00:04:35.690321] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2
00:23:19.936  [2024-12-10 00:04:35.690328] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted
00:23:19.936  [2024-12-10 00:04:35.690335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state.
00:23:19.936  request:
00:23:19.936  {
00:23:19.936    "name": "TLSTEST",
00:23:19.936    "trtype": "tcp",
00:23:19.936    "traddr": "10.0.0.2",
00:23:19.936    "adrfam": "ipv4",
00:23:19.936    "trsvcid": "4420",
00:23:19.936    "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:23:19.936    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:19.936    "prchk_reftag": false,
00:23:19.936    "prchk_guard": false,
00:23:19.936    "hdgst": false,
00:23:19.936    "ddgst": false,
00:23:19.936    "psk": "key0",
00:23:19.936    "allow_unrecognized_csi": false,
00:23:19.936    "method": "bdev_nvme_attach_controller",
00:23:19.936    "req_id": 1
00:23:19.936  }
00:23:19.936  Got JSON-RPC error response
00:23:19.936  response:
00:23:19.936  {
00:23:19.936    "code": -5,
00:23:19.936    "message": "Input/output error"
00:23:19.936  }
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3108447
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108447 ']'
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108447
00:23:19.936    00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:19.936    00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108447
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108447'
00:23:19.936  killing process with pid 3108447
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108447
00:23:19.936  Received shutdown signal, test time was about 10.000000 seconds
00:23:19.936  
00:23:19.936                                                                                                  Latency(us)
00:23:19.936  
[2024-12-09T23:04:35.793Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:19.936  
[2024-12-09T23:04:35.793Z]  ===================================================================================================================
00:23:19.936  
[2024-12-09T23:04:35.793Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:23:19.936   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108447
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:20.195    00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3108470
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3108470 /var/tmp/bdevperf.sock
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108470 ']'
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:20.195  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:20.195   00:04:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:20.195  [2024-12-10 00:04:35.964789] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:20.195  [2024-12-10 00:04:35.964838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108470 ]
00:23:20.195  [2024-12-10 00:04:36.038141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:20.454  [2024-12-10 00:04:36.075623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:20.454   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:20.454   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:20.454   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 ''
00:23:20.712  [2024-12-10 00:04:36.350619] keyring.c:  24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 
00:23:20.712  [2024-12-10 00:04:36.350651] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:23:20.712  request:
00:23:20.712  {
00:23:20.712    "name": "key0",
00:23:20.712    "path": "",
00:23:20.712    "method": "keyring_file_add_key",
00:23:20.712    "req_id": 1
00:23:20.712  }
00:23:20.712  Got JSON-RPC error response
00:23:20.712  response:
00:23:20.712  {
00:23:20.712    "code": -1,
00:23:20.712    "message": "Operation not permitted"
00:23:20.712  }
00:23:20.712   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:20.712  [2024-12-10 00:04:36.535183] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:20.712  [2024-12-10 00:04:36.535216] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:23:20.712  request:
00:23:20.712  {
00:23:20.712    "name": "TLSTEST",
00:23:20.712    "trtype": "tcp",
00:23:20.712    "traddr": "10.0.0.2",
00:23:20.712    "adrfam": "ipv4",
00:23:20.712    "trsvcid": "4420",
00:23:20.712    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:20.712    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:20.712    "prchk_reftag": false,
00:23:20.712    "prchk_guard": false,
00:23:20.712    "hdgst": false,
00:23:20.712    "ddgst": false,
00:23:20.712    "psk": "key0",
00:23:20.712    "allow_unrecognized_csi": false,
00:23:20.712    "method": "bdev_nvme_attach_controller",
00:23:20.712    "req_id": 1
00:23:20.712  }
00:23:20.712  Got JSON-RPC error response
00:23:20.712  response:
00:23:20.712  {
00:23:20.712    "code": -126,
00:23:20.712    "message": "Required key not available"
00:23:20.712  }
00:23:20.712   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3108470
00:23:20.712   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108470 ']'
00:23:20.712   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108470
00:23:20.712    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:20.971    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108470
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108470'
00:23:20.971  killing process with pid 3108470
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108470
00:23:20.971  Received shutdown signal, test time was about 10.000000 seconds
00:23:20.971  
00:23:20.971                                                                                                  Latency(us)
00:23:20.971  
[2024-12-09T23:04:36.828Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:20.971  
[2024-12-09T23:04:36.828Z]  ===================================================================================================================
00:23:20.971  
[2024-12-09T23:04:36.828Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108470
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3103917
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3103917 ']'
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3103917
00:23:20.971    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:20.971    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3103917
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3103917'
00:23:20.971  killing process with pid 3103917
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3103917
00:23:20.971   00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3103917
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2
00:23:21.230    00:04:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:23:21.230    00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.wYWtJyZDQX
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.wYWtJyZDQX
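The `format_interchange_psk`/`format_key` helper above (which shells out to `python -` at step 733) turns the raw hex key into the TLS PSK interchange format `NVMeTLSkey-1:<digest>:<base64>:`. A minimal sketch of what that helper appears to compute, assuming the base64 payload is the configured key text followed by its CRC-32 appended as four little-endian bytes (this reproduces the `key_long` value printed above; the function name mirrors the script's, the implementation is a reconstruction):

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Encode a configured PSK in the NVMe/TCP interchange format.

    Assumption: the base64 payload is the key text followed by its
    CRC-32, appended as four little-endian bytes.
    """
    raw = key.encode()
    payload = raw + zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(payload).decode())

# Inputs taken verbatim from the log (key + digest 2).
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2
)
```

Note the payload encodes the key *text*, not its decoded bytes: the base64 body above starts `MDAxMTIy…`, which decodes back to the ASCII string `001122…`.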
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2
00:23:21.230   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3108708
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3108708
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108708 ']'
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:21.231  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:21.231   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:21.488  [2024-12-10 00:04:37.089796] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:21.488  [2024-12-10 00:04:37.089843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:21.488  [2024-12-10 00:04:37.166459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:21.489  [2024-12-10 00:04:37.201119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:21.489  [2024-12-10 00:04:37.201154] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:21.489  [2024-12-10 00:04:37.201161] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:21.489  [2024-12-10 00:04:37.201173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:21.489  [2024-12-10 00:04:37.201178] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:21.489  [2024-12-10 00:04:37.201688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYWtJyZDQX
00:23:21.489   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:21.747  [2024-12-10 00:04:37.508932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:21.747   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:22.005   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:22.263  [2024-12-10 00:04:37.917982] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:22.264  [2024-12-10 00:04:37.918181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:22.264   00:04:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:22.264  malloc0
00:23:22.522   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:22.522   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:22.781   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
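Steps 52 through 59 of `tls.sh` above configure the target end to end: TCP transport, subsystem, TLS-secured listener (`-k`), a malloc bdev as namespace, the key in the keyring, and the allowed host pinned to that key. The same sequence can be sketched as a plain script; `rpc` here is a hypothetical stand-in that only echoes each call (in the harness it is `scripts/rpc.py` against the running `nvmf_tgt`), so the sequence can be read and dry-run without a live target:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for scripts/rpc.py; each call is echoed rather than executed.
rpc() { echo "rpc.py $*"; }

PSK_PATH=/tmp/tmp.wYWtJyZDQX    # key file written and chmod 0600 earlier in the log

rpc nvmf_create_transport -t tcp -o
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
rpc bdev_malloc_create 32 4096 -b malloc0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc keyring_file_add_key key0 "$PSK_PATH"
rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

The ordering matters to the test flow: the listener is created with `-k` before any host is admitted, and the key must exist in the keyring before `nvmf_subsystem_add_host --psk key0` can reference it.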
00:23:23.038   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYWtJyZDQX
00:23:23.038   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:23.038   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:23.038   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wYWtJyZDQX
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3108956
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3108956 /var/tmp/bdevperf.sock
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3108956 ']'
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:23.039  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:23.039   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:23.039  [2024-12-10 00:04:38.746770] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:23.039  [2024-12-10 00:04:38.746823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3108956 ]
00:23:23.039  [2024-12-10 00:04:38.823524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:23.039  [2024-12-10 00:04:38.863098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:23.297   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:23.297   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:23.297   00:04:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:23.555   00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:23.555  [2024-12-10 00:04:39.331466] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:23.555  TLSTESTn1
00:23:23.835   00:04:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:23.835  Running I/O for 10 seconds...
00:23:25.717       5345.00 IOPS,    20.88 MiB/s
[2024-12-09T23:04:42.952Z]      5452.50 IOPS,    21.30 MiB/s
[2024-12-09T23:04:43.888Z]      5497.33 IOPS,    21.47 MiB/s
[2024-12-09T23:04:44.833Z]      5468.00 IOPS,    21.36 MiB/s
[2024-12-09T23:04:45.767Z]      5493.00 IOPS,    21.46 MiB/s
[2024-12-09T23:04:46.701Z]      5524.33 IOPS,    21.58 MiB/s
[2024-12-09T23:04:47.636Z]      5543.71 IOPS,    21.66 MiB/s
[2024-12-09T23:04:48.571Z]      5547.62 IOPS,    21.67 MiB/s
[2024-12-09T23:04:49.951Z]      5539.44 IOPS,    21.64 MiB/s
[2024-12-09T23:04:49.951Z]      5541.60 IOPS,    21.65 MiB/s
00:23:34.094                                                                                                  Latency(us)
00:23:34.094  
[2024-12-09T23:04:49.951Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:34.094  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:34.094  	 Verification LBA range: start 0x0 length 0x2000
00:23:34.094  	 TLSTESTn1           :      10.01    5546.41      21.67       0.00     0.00   23042.77    6647.22   23343.30
00:23:34.094  
[2024-12-09T23:04:49.951Z]  ===================================================================================================================
00:23:34.094  
[2024-12-09T23:04:49.951Z]  Total                       :               5546.41      21.67       0.00     0.00   23042.77    6647.22   23343.30
00:23:34.094  {
00:23:34.094    "results": [
00:23:34.094      {
00:23:34.094        "job": "TLSTESTn1",
00:23:34.094        "core_mask": "0x4",
00:23:34.094        "workload": "verify",
00:23:34.094        "status": "finished",
00:23:34.094        "verify_range": {
00:23:34.094          "start": 0,
00:23:34.094          "length": 8192
00:23:34.094        },
00:23:34.094        "queue_depth": 128,
00:23:34.094        "io_size": 4096,
00:23:34.094        "runtime": 10.014226,
00:23:34.094        "iops": 5546.409677592656,
00:23:34.094        "mibps": 21.665662803096314,
00:23:34.094        "io_failed": 0,
00:23:34.094        "io_timeout": 0,
00:23:34.094        "avg_latency_us": 23042.765512777314,
00:23:34.094        "min_latency_us": 6647.222857142857,
00:23:34.094        "max_latency_us": 23343.299047619046
00:23:34.094      }
00:23:34.094    ],
00:23:34.094    "core_count": 1
00:23:34.094  }
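The throughput columns in the summary table are derived from the raw counters in the JSON block above: MiB/s is simply IOPS times the 4096-byte I/O size, and IOPS times runtime recovers the total I/O count. A quick sketch checking that arithmetic against the reported values (all numbers taken from the `TLSTESTn1` results entry printed above):

```python
# Values copied from the "TLSTESTn1" results entry in the log.
iops = 5546.409677592656
io_size = 4096            # bytes per I/O ("io_size" above)
runtime = 10.014226       # seconds ("runtime" above)

# MiB/s = IOPS * bytes-per-IO / 2^20 -> the "mibps" field
mibps = iops * io_size / (1 << 20)

# Total I/Os completed over the run is IOPS * runtime
total_ios = iops * runtime
```

This confirms the summary line `5546.41 IOPS / 21.67 MiB/s` is internally consistent: roughly 55,543 4-KiB verify I/Os completed in the 10-second window.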
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3108956
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108956 ']'
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108956
00:23:34.094    00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:34.094    00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108956
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108956'
00:23:34.094  killing process with pid 3108956
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108956
00:23:34.094  Received shutdown signal, test time was about 10.000000 seconds
00:23:34.094                                                                                                  Latency(us)
[2024-12-09T23:04:49.951Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T23:04:49.951Z]  ===================================================================================================================
[2024-12-09T23:04:49.951Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108956
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.wYWtJyZDQX
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYWtJyZDQX
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYWtJyZDQX
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:34.094    00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wYWtJyZDQX
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wYWtJyZDQX
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3110742
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3110742 /var/tmp/bdevperf.sock
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3110742 ']'
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:34.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:34.094   00:04:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:34.094  [2024-12-10 00:04:49.847366] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:34.094  [2024-12-10 00:04:49.847413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110742 ]
00:23:34.094  [2024-12-10 00:04:49.914465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:34.094  [2024-12-10 00:04:49.950892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:34.354   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:34.354   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:34.354   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:34.612  [2024-12-10 00:04:50.235392] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wYWtJyZDQX': 0100666
00:23:34.612  [2024-12-10 00:04:50.235420] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:23:34.612  request:
00:23:34.612  {
00:23:34.612    "name": "key0",
00:23:34.612    "path": "/tmp/tmp.wYWtJyZDQX",
00:23:34.612    "method": "keyring_file_add_key",
00:23:34.612    "req_id": 1
00:23:34.612  }
00:23:34.612  Got JSON-RPC error response
00:23:34.612  response:
00:23:34.612  {
00:23:34.612    "code": -1,
00:23:34.612    "message": "Operation not permitted"
00:23:34.612  }
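The rejection above is the negative path this test exercises: the key file was deliberately set to mode 0666 (`target/tls.sh@171 chmod 0666`), and SPDK's keyring refuses key files accessible by group or others, which surfaces as code -1 ("Operation not permitted", i.e. -EPERM) over JSON-RPC. A minimal sketch of an equivalent mode check — the helper name is illustrative, not SPDK's actual function:

```python
import os
import stat
import tempfile

def check_key_file_mode(path: str) -> None:
    # Reject key files readable or writable by group/others, mirroring
    # the keyring_file_check_path error seen in the log. The "0100" in
    # the logged message is the S_IFREG file-type bits in octal.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & 0o077:
        raise PermissionError(
            f"Invalid permissions for key file '{path}': 0100{mode:o}")

# Demonstrate with a throwaway file: 0666 is rejected, 0600 passes.
with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name
os.chmod(key_path, 0o666)
try:
    check_key_file_mode(key_path)
    rejected = False
except PermissionError:
    rejected = True
os.chmod(key_path, 0o600)
check_key_file_mode(key_path)  # owner-only mode: no exception
os.unlink(key_path)
```

This matches the rest of the log: once `target/tls.sh@182` restores mode 0600, the same `keyring_file_add_key` call succeeds.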
00:23:34.612   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:34.612  [2024-12-10 00:04:50.435985] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:34.612  [2024-12-10 00:04:50.436013] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0
00:23:34.612  request:
00:23:34.612  {
00:23:34.612    "name": "TLSTEST",
00:23:34.612    "trtype": "tcp",
00:23:34.612    "traddr": "10.0.0.2",
00:23:34.612    "adrfam": "ipv4",
00:23:34.612    "trsvcid": "4420",
00:23:34.612    "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:34.612    "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:34.612    "prchk_reftag": false,
00:23:34.612    "prchk_guard": false,
00:23:34.612    "hdgst": false,
00:23:34.612    "ddgst": false,
00:23:34.612    "psk": "key0",
00:23:34.612    "allow_unrecognized_csi": false,
00:23:34.612    "method": "bdev_nvme_attach_controller",
00:23:34.612    "req_id": 1
00:23:34.612  }
00:23:34.612  Got JSON-RPC error response
00:23:34.612  response:
00:23:34.612  {
00:23:34.612    "code": -126,
00:23:34.612    "message": "Required key not available"
00:23:34.612  }
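The -126 status here corresponds to Linux ENOKEY ("Required key not available"): the controller attach fails because the preceding `keyring_file_add_key` never registered key0. The mapping can be confirmed from Python's errno table (ENOKEY is Linux-specific, hence the fallback):

```python
import errno
import os

# ENOKEY is only defined on Linux; fall back to its known value elsewhere.
ENOKEY = getattr(errno, "ENOKEY", 126)
print(ENOKEY)                # 126, matching the -126 JSON-RPC code
print(os.strerror(ENOKEY))   # "Required key not available" on glibc
```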
00:23:34.612   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3110742
00:23:34.612   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3110742 ']'
00:23:34.612   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3110742
00:23:34.612    00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:34.871    00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3110742
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3110742'
00:23:34.871  killing process with pid 3110742
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3110742
00:23:34.871  Received shutdown signal, test time was about 10.000000 seconds
00:23:34.871                                                                                                  Latency(us)
[2024-12-09T23:04:50.728Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T23:04:50.728Z]  ===================================================================================================================
[2024-12-09T23:04:50.728Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
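The `min` value 18446744073709551616.00 in the totals row is a sentinel leaking through: with zero completed I/Os, a minimum-latency accumulator presumably seeded with UINT64_MAX is never updated, and UINT64_MAX rounds up to exactly 2^64 when printed as a double. The effect reproduces in a couple of lines (the initialization detail is an assumption about bdevperf internals, not stated in this log):

```python
UINT64_MAX = 2**64 - 1

# No I/O completed, so a min-latency field seeded with the sentinel is
# printed unchanged; as a double, 2**64 - 1 rounds up to exactly 2**64.
min_latency = UINT64_MAX
print(f"{float(min_latency):.2f}")  # -> 18446744073709551616.00
```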
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3110742
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3108708
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3108708 ']'
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3108708
00:23:34.871    00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:34.871    00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3108708
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:34.871   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:34.872   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3108708'
00:23:34.872  killing process with pid 3108708
00:23:34.872   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3108708
00:23:34.872   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3108708
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3110976
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3110976
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3110976 ']'
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:35.131  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:35.131   00:04:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:35.131  [2024-12-10 00:04:50.936829] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:35.131  [2024-12-10 00:04:50.936874] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:35.388  [2024-12-10 00:04:51.013826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:35.388  [2024-12-10 00:04:51.052251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:35.388  [2024-12-10 00:04:51.052286] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:35.388  [2024-12-10 00:04:51.052293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:35.388  [2024-12-10 00:04:51.052299] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:35.388  [2024-12-10 00:04:51.052305] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:35.388  [2024-12-10 00:04:51.052807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:35.388   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:35.389    00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYWtJyZDQX
00:23:35.389   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:35.649  [2024-12-10 00:04:51.369068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:35.649   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:35.907   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:36.166  [2024-12-10 00:04:51.782121] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:36.166  [2024-12-10 00:04:51.782316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:36.166   00:04:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:36.166  malloc0
00:23:36.166   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:36.424   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:36.682  [2024-12-10 00:04:52.371526] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.wYWtJyZDQX': 0100666
00:23:36.682  [2024-12-10 00:04:52.371550] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:23:36.682  request:
00:23:36.682  {
00:23:36.682    "name": "key0",
00:23:36.682    "path": "/tmp/tmp.wYWtJyZDQX",
00:23:36.682    "method": "keyring_file_add_key",
00:23:36.682    "req_id": 1
00:23:36.682  }
00:23:36.682  Got JSON-RPC error response
00:23:36.682  response:
00:23:36.682  {
00:23:36.682    "code": -1,
00:23:36.682    "message": "Operation not permitted"
00:23:36.682  }
00:23:36.683   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:23:36.939  [2024-12-10 00:04:52.556022] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist
00:23:36.939  [2024-12-10 00:04:52.556052] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport
00:23:36.939  request:
00:23:36.939  {
00:23:36.939    "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:36.939    "host": "nqn.2016-06.io.spdk:host1",
00:23:36.939    "psk": "key0",
00:23:36.939    "method": "nvmf_subsystem_add_host",
00:23:36.939    "req_id": 1
00:23:36.939  }
00:23:36.939  Got JSON-RPC error response
00:23:36.939  response:
00:23:36.939  {
00:23:36.939    "code": -32603,
00:23:36.939    "message": "Internal error"
00:23:36.939  }
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3110976
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3110976 ']'
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3110976
00:23:36.939    00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:36.939    00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3110976
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3110976'
00:23:36.939  killing process with pid 3110976
00:23:36.939   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3110976
00:23:36.940   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3110976
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.wYWtJyZDQX
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3111307
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3111307
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3111307 ']'
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:37.197   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:37.198  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:37.198   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:37.198   00:04:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.198  [2024-12-10 00:04:52.856717] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:37.198  [2024-12-10 00:04:52.856764] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:37.198  [2024-12-10 00:04:52.936486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:37.198  [2024-12-10 00:04:52.972765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:37.198  [2024-12-10 00:04:52.972802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:37.198  [2024-12-10 00:04:52.972809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:37.198  [2024-12-10 00:04:52.972815] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:37.198  [2024-12-10 00:04:52.972820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:37.198  [2024-12-10 00:04:52.973322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYWtJyZDQX
00:23:37.456   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:37.456  [2024-12-10 00:04:53.277113] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:37.457   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:37.723   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:37.988  [2024-12-10 00:04:53.666106] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:37.988  [2024-12-10 00:04:53.666305] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:37.988   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:38.246  malloc0
00:23:38.246   00:04:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:38.246   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:38.505   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3111701
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3111701 /var/tmp/bdevperf.sock
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3111701 ']'
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:38.764  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:38.764   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:38.764  [2024-12-10 00:04:54.493757] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:38.764  [2024-12-10 00:04:54.493804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3111701 ]
00:23:38.764  [2024-12-10 00:04:54.567395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:38.764  [2024-12-10 00:04:54.606803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:39.023   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:39.023   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:39.023   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:39.287   00:04:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:39.287  [2024-12-10 00:04:55.050861] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:39.287  TLSTESTn1
00:23:39.546    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config
00:23:39.805   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{
00:23:39.805    "subsystems": [
00:23:39.805      {
00:23:39.805        "subsystem": "keyring",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "keyring_file_add_key",
00:23:39.805            "params": {
00:23:39.805              "name": "key0",
00:23:39.805              "path": "/tmp/tmp.wYWtJyZDQX"
00:23:39.805            }
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "iobuf",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "iobuf_set_options",
00:23:39.805            "params": {
00:23:39.805              "small_pool_count": 8192,
00:23:39.805              "large_pool_count": 1024,
00:23:39.805              "small_bufsize": 8192,
00:23:39.805              "large_bufsize": 135168,
00:23:39.805              "enable_numa": false
00:23:39.805            }
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "sock",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "sock_set_default_impl",
00:23:39.805            "params": {
00:23:39.805              "impl_name": "posix"
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "sock_impl_set_options",
00:23:39.805            "params": {
00:23:39.805              "impl_name": "ssl",
00:23:39.805              "recv_buf_size": 4096,
00:23:39.805              "send_buf_size": 4096,
00:23:39.805              "enable_recv_pipe": true,
00:23:39.805              "enable_quickack": false,
00:23:39.805              "enable_placement_id": 0,
00:23:39.805              "enable_zerocopy_send_server": true,
00:23:39.805              "enable_zerocopy_send_client": false,
00:23:39.805              "zerocopy_threshold": 0,
00:23:39.805              "tls_version": 0,
00:23:39.805              "enable_ktls": false
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "sock_impl_set_options",
00:23:39.805            "params": {
00:23:39.805              "impl_name": "posix",
00:23:39.805              "recv_buf_size": 2097152,
00:23:39.805              "send_buf_size": 2097152,
00:23:39.805              "enable_recv_pipe": true,
00:23:39.805              "enable_quickack": false,
00:23:39.805              "enable_placement_id": 0,
00:23:39.805              "enable_zerocopy_send_server": true,
00:23:39.805              "enable_zerocopy_send_client": false,
00:23:39.805              "zerocopy_threshold": 0,
00:23:39.805              "tls_version": 0,
00:23:39.805              "enable_ktls": false
00:23:39.805            }
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "vmd",
00:23:39.805        "config": []
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "accel",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "accel_set_options",
00:23:39.805            "params": {
00:23:39.805              "small_cache_size": 128,
00:23:39.805              "large_cache_size": 16,
00:23:39.805              "task_count": 2048,
00:23:39.805              "sequence_count": 2048,
00:23:39.805              "buf_count": 2048
00:23:39.805            }
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "bdev",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "bdev_set_options",
00:23:39.805            "params": {
00:23:39.805              "bdev_io_pool_size": 65535,
00:23:39.805              "bdev_io_cache_size": 256,
00:23:39.805              "bdev_auto_examine": true,
00:23:39.805              "iobuf_small_cache_size": 128,
00:23:39.805              "iobuf_large_cache_size": 16
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_raid_set_options",
00:23:39.805            "params": {
00:23:39.805              "process_window_size_kb": 1024,
00:23:39.805              "process_max_bandwidth_mb_sec": 0
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_iscsi_set_options",
00:23:39.805            "params": {
00:23:39.805              "timeout_sec": 30
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_nvme_set_options",
00:23:39.805            "params": {
00:23:39.805              "action_on_timeout": "none",
00:23:39.805              "timeout_us": 0,
00:23:39.805              "timeout_admin_us": 0,
00:23:39.805              "keep_alive_timeout_ms": 10000,
00:23:39.805              "arbitration_burst": 0,
00:23:39.805              "low_priority_weight": 0,
00:23:39.805              "medium_priority_weight": 0,
00:23:39.805              "high_priority_weight": 0,
00:23:39.805              "nvme_adminq_poll_period_us": 10000,
00:23:39.805              "nvme_ioq_poll_period_us": 0,
00:23:39.805              "io_queue_requests": 0,
00:23:39.805              "delay_cmd_submit": true,
00:23:39.805              "transport_retry_count": 4,
00:23:39.805              "bdev_retry_count": 3,
00:23:39.805              "transport_ack_timeout": 0,
00:23:39.805              "ctrlr_loss_timeout_sec": 0,
00:23:39.805              "reconnect_delay_sec": 0,
00:23:39.805              "fast_io_fail_timeout_sec": 0,
00:23:39.805              "disable_auto_failback": false,
00:23:39.805              "generate_uuids": false,
00:23:39.805              "transport_tos": 0,
00:23:39.805              "nvme_error_stat": false,
00:23:39.805              "rdma_srq_size": 0,
00:23:39.805              "io_path_stat": false,
00:23:39.805              "allow_accel_sequence": false,
00:23:39.805              "rdma_max_cq_size": 0,
00:23:39.805              "rdma_cm_event_timeout_ms": 0,
00:23:39.805              "dhchap_digests": [
00:23:39.805                "sha256",
00:23:39.805                "sha384",
00:23:39.805                "sha512"
00:23:39.805              ],
00:23:39.805              "dhchap_dhgroups": [
00:23:39.805                "null",
00:23:39.805                "ffdhe2048",
00:23:39.805                "ffdhe3072",
00:23:39.805                "ffdhe4096",
00:23:39.805                "ffdhe6144",
00:23:39.805                "ffdhe8192"
00:23:39.805              ]
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_nvme_set_hotplug",
00:23:39.805            "params": {
00:23:39.805              "period_us": 100000,
00:23:39.805              "enable": false
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_malloc_create",
00:23:39.805            "params": {
00:23:39.805              "name": "malloc0",
00:23:39.805              "num_blocks": 8192,
00:23:39.805              "block_size": 4096,
00:23:39.805              "physical_block_size": 4096,
00:23:39.805              "uuid": "2f6722c0-2873-4228-bbe1-199b57d57546",
00:23:39.805              "optimal_io_boundary": 0,
00:23:39.805              "md_size": 0,
00:23:39.805              "dif_type": 0,
00:23:39.805              "dif_is_head_of_md": false,
00:23:39.805              "dif_pi_format": 0
00:23:39.805            }
00:23:39.805          },
00:23:39.805          {
00:23:39.805            "method": "bdev_wait_for_examine"
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "nbd",
00:23:39.805        "config": []
00:23:39.805      },
00:23:39.805      {
00:23:39.805        "subsystem": "scheduler",
00:23:39.805        "config": [
00:23:39.805          {
00:23:39.805            "method": "framework_set_scheduler",
00:23:39.805            "params": {
00:23:39.805              "name": "static"
00:23:39.805            }
00:23:39.805          }
00:23:39.805        ]
00:23:39.805      },
00:23:39.806      {
00:23:39.806        "subsystem": "nvmf",
00:23:39.806        "config": [
00:23:39.806          {
00:23:39.806            "method": "nvmf_set_config",
00:23:39.806            "params": {
00:23:39.806              "discovery_filter": "match_any",
00:23:39.806              "admin_cmd_passthru": {
00:23:39.806                "identify_ctrlr": false
00:23:39.806              },
00:23:39.806              "dhchap_digests": [
00:23:39.806                "sha256",
00:23:39.806                "sha384",
00:23:39.806                "sha512"
00:23:39.806              ],
00:23:39.806              "dhchap_dhgroups": [
00:23:39.806                "null",
00:23:39.806                "ffdhe2048",
00:23:39.806                "ffdhe3072",
00:23:39.806                "ffdhe4096",
00:23:39.806                "ffdhe6144",
00:23:39.806                "ffdhe8192"
00:23:39.806              ]
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_set_max_subsystems",
00:23:39.806            "params": {
00:23:39.806              "max_subsystems": 1024
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_set_crdt",
00:23:39.806            "params": {
00:23:39.806              "crdt1": 0,
00:23:39.806              "crdt2": 0,
00:23:39.806              "crdt3": 0
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_create_transport",
00:23:39.806            "params": {
00:23:39.806              "trtype": "TCP",
00:23:39.806              "max_queue_depth": 128,
00:23:39.806              "max_io_qpairs_per_ctrlr": 127,
00:23:39.806              "in_capsule_data_size": 4096,
00:23:39.806              "max_io_size": 131072,
00:23:39.806              "io_unit_size": 131072,
00:23:39.806              "max_aq_depth": 128,
00:23:39.806              "num_shared_buffers": 511,
00:23:39.806              "buf_cache_size": 4294967295,
00:23:39.806              "dif_insert_or_strip": false,
00:23:39.806              "zcopy": false,
00:23:39.806              "c2h_success": false,
00:23:39.806              "sock_priority": 0,
00:23:39.806              "abort_timeout_sec": 1,
00:23:39.806              "ack_timeout": 0,
00:23:39.806              "data_wr_pool_size": 0
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_create_subsystem",
00:23:39.806            "params": {
00:23:39.806              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:39.806              "allow_any_host": false,
00:23:39.806              "serial_number": "SPDK00000000000001",
00:23:39.806              "model_number": "SPDK bdev Controller",
00:23:39.806              "max_namespaces": 10,
00:23:39.806              "min_cntlid": 1,
00:23:39.806              "max_cntlid": 65519,
00:23:39.806              "ana_reporting": false
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_subsystem_add_host",
00:23:39.806            "params": {
00:23:39.806              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:39.806              "host": "nqn.2016-06.io.spdk:host1",
00:23:39.806              "psk": "key0"
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_subsystem_add_ns",
00:23:39.806            "params": {
00:23:39.806              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:39.806              "namespace": {
00:23:39.806                "nsid": 1,
00:23:39.806                "bdev_name": "malloc0",
00:23:39.806                "nguid": "2F6722C028734228BBE1199B57D57546",
00:23:39.806                "uuid": "2f6722c0-2873-4228-bbe1-199b57d57546",
00:23:39.806                "no_auto_visible": false
00:23:39.806              }
00:23:39.806            }
00:23:39.806          },
00:23:39.806          {
00:23:39.806            "method": "nvmf_subsystem_add_listener",
00:23:39.806            "params": {
00:23:39.806              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:39.806              "listen_address": {
00:23:39.806                "trtype": "TCP",
00:23:39.806                "adrfam": "IPv4",
00:23:39.806                "traddr": "10.0.0.2",
00:23:39.806                "trsvcid": "4420"
00:23:39.806              },
00:23:39.806              "secure_channel": true
00:23:39.806            }
00:23:39.806          }
00:23:39.806        ]
00:23:39.806      }
00:23:39.806    ]
00:23:39.806  }'
00:23:39.806    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:23:40.065   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{
00:23:40.065    "subsystems": [
00:23:40.065      {
00:23:40.065        "subsystem": "keyring",
00:23:40.065        "config": [
00:23:40.065          {
00:23:40.065            "method": "keyring_file_add_key",
00:23:40.065            "params": {
00:23:40.065              "name": "key0",
00:23:40.065              "path": "/tmp/tmp.wYWtJyZDQX"
00:23:40.065            }
00:23:40.065          }
00:23:40.065        ]
00:23:40.065      },
00:23:40.065      {
00:23:40.065        "subsystem": "iobuf",
00:23:40.065        "config": [
00:23:40.065          {
00:23:40.065            "method": "iobuf_set_options",
00:23:40.065            "params": {
00:23:40.065              "small_pool_count": 8192,
00:23:40.065              "large_pool_count": 1024,
00:23:40.065              "small_bufsize": 8192,
00:23:40.065              "large_bufsize": 135168,
00:23:40.065              "enable_numa": false
00:23:40.065            }
00:23:40.065          }
00:23:40.065        ]
00:23:40.065      },
00:23:40.065      {
00:23:40.065        "subsystem": "sock",
00:23:40.065        "config": [
00:23:40.065          {
00:23:40.065            "method": "sock_set_default_impl",
00:23:40.065            "params": {
00:23:40.065              "impl_name": "posix"
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "sock_impl_set_options",
00:23:40.065            "params": {
00:23:40.065              "impl_name": "ssl",
00:23:40.065              "recv_buf_size": 4096,
00:23:40.065              "send_buf_size": 4096,
00:23:40.065              "enable_recv_pipe": true,
00:23:40.065              "enable_quickack": false,
00:23:40.065              "enable_placement_id": 0,
00:23:40.065              "enable_zerocopy_send_server": true,
00:23:40.065              "enable_zerocopy_send_client": false,
00:23:40.065              "zerocopy_threshold": 0,
00:23:40.065              "tls_version": 0,
00:23:40.065              "enable_ktls": false
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "sock_impl_set_options",
00:23:40.065            "params": {
00:23:40.065              "impl_name": "posix",
00:23:40.065              "recv_buf_size": 2097152,
00:23:40.065              "send_buf_size": 2097152,
00:23:40.065              "enable_recv_pipe": true,
00:23:40.065              "enable_quickack": false,
00:23:40.065              "enable_placement_id": 0,
00:23:40.065              "enable_zerocopy_send_server": true,
00:23:40.065              "enable_zerocopy_send_client": false,
00:23:40.065              "zerocopy_threshold": 0,
00:23:40.065              "tls_version": 0,
00:23:40.065              "enable_ktls": false
00:23:40.065            }
00:23:40.065          }
00:23:40.065        ]
00:23:40.065      },
00:23:40.065      {
00:23:40.065        "subsystem": "vmd",
00:23:40.065        "config": []
00:23:40.065      },
00:23:40.065      {
00:23:40.065        "subsystem": "accel",
00:23:40.065        "config": [
00:23:40.065          {
00:23:40.065            "method": "accel_set_options",
00:23:40.065            "params": {
00:23:40.065              "small_cache_size": 128,
00:23:40.065              "large_cache_size": 16,
00:23:40.065              "task_count": 2048,
00:23:40.065              "sequence_count": 2048,
00:23:40.065              "buf_count": 2048
00:23:40.065            }
00:23:40.065          }
00:23:40.065        ]
00:23:40.065      },
00:23:40.065      {
00:23:40.065        "subsystem": "bdev",
00:23:40.065        "config": [
00:23:40.065          {
00:23:40.065            "method": "bdev_set_options",
00:23:40.065            "params": {
00:23:40.065              "bdev_io_pool_size": 65535,
00:23:40.065              "bdev_io_cache_size": 256,
00:23:40.065              "bdev_auto_examine": true,
00:23:40.065              "iobuf_small_cache_size": 128,
00:23:40.065              "iobuf_large_cache_size": 16
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "bdev_raid_set_options",
00:23:40.065            "params": {
00:23:40.065              "process_window_size_kb": 1024,
00:23:40.065              "process_max_bandwidth_mb_sec": 0
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "bdev_iscsi_set_options",
00:23:40.065            "params": {
00:23:40.065              "timeout_sec": 30
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "bdev_nvme_set_options",
00:23:40.065            "params": {
00:23:40.065              "action_on_timeout": "none",
00:23:40.065              "timeout_us": 0,
00:23:40.065              "timeout_admin_us": 0,
00:23:40.065              "keep_alive_timeout_ms": 10000,
00:23:40.065              "arbitration_burst": 0,
00:23:40.065              "low_priority_weight": 0,
00:23:40.065              "medium_priority_weight": 0,
00:23:40.065              "high_priority_weight": 0,
00:23:40.065              "nvme_adminq_poll_period_us": 10000,
00:23:40.065              "nvme_ioq_poll_period_us": 0,
00:23:40.065              "io_queue_requests": 512,
00:23:40.065              "delay_cmd_submit": true,
00:23:40.065              "transport_retry_count": 4,
00:23:40.065              "bdev_retry_count": 3,
00:23:40.065              "transport_ack_timeout": 0,
00:23:40.065              "ctrlr_loss_timeout_sec": 0,
00:23:40.065              "reconnect_delay_sec": 0,
00:23:40.065              "fast_io_fail_timeout_sec": 0,
00:23:40.065              "disable_auto_failback": false,
00:23:40.065              "generate_uuids": false,
00:23:40.065              "transport_tos": 0,
00:23:40.065              "nvme_error_stat": false,
00:23:40.065              "rdma_srq_size": 0,
00:23:40.065              "io_path_stat": false,
00:23:40.065              "allow_accel_sequence": false,
00:23:40.065              "rdma_max_cq_size": 0,
00:23:40.065              "rdma_cm_event_timeout_ms": 0,
00:23:40.065              "dhchap_digests": [
00:23:40.065                "sha256",
00:23:40.065                "sha384",
00:23:40.065                "sha512"
00:23:40.065              ],
00:23:40.065              "dhchap_dhgroups": [
00:23:40.065                "null",
00:23:40.065                "ffdhe2048",
00:23:40.065                "ffdhe3072",
00:23:40.065                "ffdhe4096",
00:23:40.065                "ffdhe6144",
00:23:40.065                "ffdhe8192"
00:23:40.065              ]
00:23:40.065            }
00:23:40.065          },
00:23:40.065          {
00:23:40.065            "method": "bdev_nvme_attach_controller",
00:23:40.065            "params": {
00:23:40.065              "name": "TLSTEST",
00:23:40.065              "trtype": "TCP",
00:23:40.065              "adrfam": "IPv4",
00:23:40.065              "traddr": "10.0.0.2",
00:23:40.065              "trsvcid": "4420",
00:23:40.065              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.065              "prchk_reftag": false,
00:23:40.065              "prchk_guard": false,
00:23:40.065              "ctrlr_loss_timeout_sec": 0,
00:23:40.066              "reconnect_delay_sec": 0,
00:23:40.066              "fast_io_fail_timeout_sec": 0,
00:23:40.066              "psk": "key0",
00:23:40.066              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:40.066              "hdgst": false,
00:23:40.066              "ddgst": false,
00:23:40.066              "multipath": "multipath"
00:23:40.066            }
00:23:40.066          },
00:23:40.066          {
00:23:40.066            "method": "bdev_nvme_set_hotplug",
00:23:40.066            "params": {
00:23:40.066              "period_us": 100000,
00:23:40.066              "enable": false
00:23:40.066            }
00:23:40.066          },
00:23:40.066          {
00:23:40.066            "method": "bdev_wait_for_examine"
00:23:40.066          }
00:23:40.066        ]
00:23:40.066      },
00:23:40.066      {
00:23:40.066        "subsystem": "nbd",
00:23:40.066        "config": []
00:23:40.066      }
00:23:40.066    ]
00:23:40.066  }'
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3111701
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3111701 ']'
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3111701
00:23:40.066    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:40.066    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111701
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111701'
00:23:40.066  killing process with pid 3111701
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3111701
00:23:40.066  Received shutdown signal, test time was about 10.000000 seconds
00:23:40.066                                                                                                  Latency(us)
[2024-12-09T23:04:55.923Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T23:04:55.923Z]  ===================================================================================================================
[2024-12-09T23:04:55.923Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
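The `min` latency printed in the Total row is exactly 2^64, which suggests an uninitialized 64-bit minimum surviving into the float formatting when no I/O completed before shutdown (the runtime, IOPS, and MiB/s columns are all 0.00). A quick arithmetic check of that reading:

```python
# The "min" column above, copied verbatim from the log.
sentinel = 18446744073709551616.00

# 2**64 is one past UINT64_MAX, i.e. the value a u64 minimum tracker
# would show if it was never updated and got widened to a float.
print(sentinel == float(2**64))  # True
```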
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3111701
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3111307
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3111307 ']'
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3111307
00:23:40.066    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:40.066   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:40.066    00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111307
00:23:40.325   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:40.325   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:40.325   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111307'
00:23:40.325  killing process with pid 3111307
00:23:40.325   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3111307
00:23:40.325   00:04:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3111307
00:23:40.325   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62
00:23:40.325   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:40.325   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:40.325    00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{
00:23:40.325    "subsystems": [
00:23:40.325      {
00:23:40.325        "subsystem": "keyring",
00:23:40.325        "config": [
00:23:40.325          {
00:23:40.325            "method": "keyring_file_add_key",
00:23:40.325            "params": {
00:23:40.325              "name": "key0",
00:23:40.325              "path": "/tmp/tmp.wYWtJyZDQX"
00:23:40.325            }
00:23:40.325          }
00:23:40.325        ]
00:23:40.325      },
00:23:40.325      {
00:23:40.325        "subsystem": "iobuf",
00:23:40.325        "config": [
00:23:40.325          {
00:23:40.325            "method": "iobuf_set_options",
00:23:40.325            "params": {
00:23:40.325              "small_pool_count": 8192,
00:23:40.325              "large_pool_count": 1024,
00:23:40.325              "small_bufsize": 8192,
00:23:40.325              "large_bufsize": 135168,
00:23:40.325              "enable_numa": false
00:23:40.325            }
00:23:40.325          }
00:23:40.325        ]
00:23:40.325      },
00:23:40.325      {
00:23:40.325        "subsystem": "sock",
00:23:40.325        "config": [
00:23:40.325          {
00:23:40.326            "method": "sock_set_default_impl",
00:23:40.326            "params": {
00:23:40.326              "impl_name": "posix"
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "sock_impl_set_options",
00:23:40.326            "params": {
00:23:40.326              "impl_name": "ssl",
00:23:40.326              "recv_buf_size": 4096,
00:23:40.326              "send_buf_size": 4096,
00:23:40.326              "enable_recv_pipe": true,
00:23:40.326              "enable_quickack": false,
00:23:40.326              "enable_placement_id": 0,
00:23:40.326              "enable_zerocopy_send_server": true,
00:23:40.326              "enable_zerocopy_send_client": false,
00:23:40.326              "zerocopy_threshold": 0,
00:23:40.326              "tls_version": 0,
00:23:40.326              "enable_ktls": false
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "sock_impl_set_options",
00:23:40.326            "params": {
00:23:40.326              "impl_name": "posix",
00:23:40.326              "recv_buf_size": 2097152,
00:23:40.326              "send_buf_size": 2097152,
00:23:40.326              "enable_recv_pipe": true,
00:23:40.326              "enable_quickack": false,
00:23:40.326              "enable_placement_id": 0,
00:23:40.326              "enable_zerocopy_send_server": true,
00:23:40.326              "enable_zerocopy_send_client": false,
00:23:40.326              "zerocopy_threshold": 0,
00:23:40.326              "tls_version": 0,
00:23:40.326              "enable_ktls": false
00:23:40.326            }
00:23:40.326          }
00:23:40.326        ]
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "vmd",
00:23:40.326        "config": []
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "accel",
00:23:40.326        "config": [
00:23:40.326          {
00:23:40.326            "method": "accel_set_options",
00:23:40.326            "params": {
00:23:40.326              "small_cache_size": 128,
00:23:40.326              "large_cache_size": 16,
00:23:40.326              "task_count": 2048,
00:23:40.326              "sequence_count": 2048,
00:23:40.326              "buf_count": 2048
00:23:40.326            }
00:23:40.326          }
00:23:40.326        ]
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "bdev",
00:23:40.326        "config": [
00:23:40.326          {
00:23:40.326            "method": "bdev_set_options",
00:23:40.326            "params": {
00:23:40.326              "bdev_io_pool_size": 65535,
00:23:40.326              "bdev_io_cache_size": 256,
00:23:40.326              "bdev_auto_examine": true,
00:23:40.326              "iobuf_small_cache_size": 128,
00:23:40.326              "iobuf_large_cache_size": 16
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_raid_set_options",
00:23:40.326            "params": {
00:23:40.326              "process_window_size_kb": 1024,
00:23:40.326              "process_max_bandwidth_mb_sec": 0
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_iscsi_set_options",
00:23:40.326            "params": {
00:23:40.326              "timeout_sec": 30
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_nvme_set_options",
00:23:40.326            "params": {
00:23:40.326              "action_on_timeout": "none",
00:23:40.326              "timeout_us": 0,
00:23:40.326              "timeout_admin_us": 0,
00:23:40.326              "keep_alive_timeout_ms": 10000,
00:23:40.326              "arbitration_burst": 0,
00:23:40.326              "low_priority_weight": 0,
00:23:40.326              "medium_priority_weight": 0,
00:23:40.326              "high_priority_weight": 0,
00:23:40.326              "nvme_adminq_poll_period_us": 10000,
00:23:40.326              "nvme_ioq_poll_period_us": 0,
00:23:40.326              "io_queue_requests": 0,
00:23:40.326              "delay_cmd_submit": true,
00:23:40.326              "transport_retry_count": 4,
00:23:40.326              "bdev_retry_count": 3,
00:23:40.326              "transport_ack_timeout": 0,
00:23:40.326              "ctrlr_loss_timeout_sec": 0,
00:23:40.326              "reconnect_delay_sec": 0,
00:23:40.326              "fast_io_fail_timeout_sec": 0,
00:23:40.326              "disable_auto_failback": false,
00:23:40.326              "generate_uuids": false,
00:23:40.326              "transport_tos": 0,
00:23:40.326              "nvme_error_stat": false,
00:23:40.326              "rdma_srq_size": 0,
00:23:40.326              "io_path_stat": false,
00:23:40.326              "allow_accel_sequence": false,
00:23:40.326              "rdma_max_cq_size": 0,
00:23:40.326              "rdma_cm_event_timeout_ms": 0,
00:23:40.326              "dhchap_digests": [
00:23:40.326                "sha256",
00:23:40.326                "sha384",
00:23:40.326                "sha512"
00:23:40.326              ],
00:23:40.326              "dhchap_dhgroups": [
00:23:40.326                "null",
00:23:40.326                "ffdhe2048",
00:23:40.326                "ffdhe3072",
00:23:40.326                "ffdhe4096",
00:23:40.326                "ffdhe6144",
00:23:40.326                "ffdhe8192"
00:23:40.326              ]
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_nvme_set_hotplug",
00:23:40.326            "params": {
00:23:40.326              "period_us": 100000,
00:23:40.326              "enable": false
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_malloc_create",
00:23:40.326            "params": {
00:23:40.326              "name": "malloc0",
00:23:40.326              "num_blocks": 8192,
00:23:40.326              "block_size": 4096,
00:23:40.326              "physical_block_size": 4096,
00:23:40.326              "uuid": "2f6722c0-2873-4228-bbe1-199b57d57546",
00:23:40.326              "optimal_io_boundary": 0,
00:23:40.326              "md_size": 0,
00:23:40.326              "dif_type": 0,
00:23:40.326              "dif_is_head_of_md": false,
00:23:40.326              "dif_pi_format": 0
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "bdev_wait_for_examine"
00:23:40.326          }
00:23:40.326        ]
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "nbd",
00:23:40.326        "config": []
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "scheduler",
00:23:40.326        "config": [
00:23:40.326          {
00:23:40.326            "method": "framework_set_scheduler",
00:23:40.326            "params": {
00:23:40.326              "name": "static"
00:23:40.326            }
00:23:40.326          }
00:23:40.326        ]
00:23:40.326      },
00:23:40.326      {
00:23:40.326        "subsystem": "nvmf",
00:23:40.326        "config": [
00:23:40.326          {
00:23:40.326            "method": "nvmf_set_config",
00:23:40.326            "params": {
00:23:40.326              "discovery_filter": "match_any",
00:23:40.326              "admin_cmd_passthru": {
00:23:40.326                "identify_ctrlr": false
00:23:40.326              },
00:23:40.326              "dhchap_digests": [
00:23:40.326                "sha256",
00:23:40.326                "sha384",
00:23:40.326                "sha512"
00:23:40.326              ],
00:23:40.326              "dhchap_dhgroups": [
00:23:40.326                "null",
00:23:40.326                "ffdhe2048",
00:23:40.326                "ffdhe3072",
00:23:40.326                "ffdhe4096",
00:23:40.326                "ffdhe6144",
00:23:40.326                "ffdhe8192"
00:23:40.326              ]
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "nvmf_set_max_subsystems",
00:23:40.326            "params": {
00:23:40.326              "max_subsystems": 1024
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "nvmf_set_crdt",
00:23:40.326            "params": {
00:23:40.326              "crdt1": 0,
00:23:40.326              "crdt2": 0,
00:23:40.326              "crdt3": 0
00:23:40.326            }
00:23:40.326          },
00:23:40.326          {
00:23:40.326            "method": "nvmf_create_transport",
00:23:40.326            "params": {
00:23:40.326              "trtype": "TCP",
00:23:40.326              "max_queue_depth": 128,
00:23:40.326              "max_io_qpairs_per_ctrlr": 127,
00:23:40.326              "in_capsule_data_size": 4096,
00:23:40.326              "max_io_size": 131072,
00:23:40.326              "io_unit_size": 131072,
00:23:40.326              "max_aq_depth": 128,
00:23:40.326              "num_shared_buffers": 511,
00:23:40.326              "buf_cache_size": 4294967295,
00:23:40.326              "dif_insert_or_strip": false,
00:23:40.326              "zcopy": false,
00:23:40.327              "c2h_success": false,
00:23:40.327              "sock_priority": 0,
00:23:40.327              "abort_timeout_sec": 1,
00:23:40.327              "ack_timeout": 0,
00:23:40.327              "data_wr_pool_size": 0
00:23:40.327            }
00:23:40.327          },
00:23:40.327          {
00:23:40.327            "method": "nvmf_create_subsystem",
00:23:40.327            "params": {
00:23:40.327              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.327              "allow_any_host": false,
00:23:40.327              "serial_number": "SPDK00000000000001",
00:23:40.327              "model_number": "SPDK bdev Controller",
00:23:40.327              "max_namespaces": 10,
00:23:40.327              "min_cntlid": 1,
00:23:40.327              "max_cntlid": 65519,
00:23:40.327              "ana_reporting": false
00:23:40.327            }
00:23:40.327          },
00:23:40.327          {
00:23:40.327            "method": "nvmf_subsystem_add_host",
00:23:40.327            "params": {
00:23:40.327              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.327              "host": "nqn.2016-06.io.spdk:host1",
00:23:40.327              "psk": "key0"
00:23:40.327            }
00:23:40.327          },
00:23:40.327          {
00:23:40.327            "method": "nvmf_subsystem_add_ns",
00:23:40.327            "params": {
00:23:40.327              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.327              "namespace": {
00:23:40.327                "nsid": 1,
00:23:40.327                "bdev_name": "malloc0",
00:23:40.327                "nguid": "2F6722C028734228BBE1199B57D57546",
00:23:40.327                "uuid": "2f6722c0-2873-4228-bbe1-199b57d57546",
00:23:40.327                "no_auto_visible": false
00:23:40.327              }
00:23:40.327            }
00:23:40.327          },
00:23:40.327          {
00:23:40.327            "method": "nvmf_subsystem_add_listener",
00:23:40.327            "params": {
00:23:40.327              "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:40.327              "listen_address": {
00:23:40.327                "trtype": "TCP",
00:23:40.327                "adrfam": "IPv4",
00:23:40.327                "traddr": "10.0.0.2",
00:23:40.327                "trsvcid": "4420"
00:23:40.327              },
00:23:40.327              "secure_channel": true
00:23:40.327            }
00:23:40.327          }
00:23:40.327        ]
00:23:40.327      }
00:23:40.327    ]
00:23:40.327  }'
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3111942
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3111942
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3111942 ']'
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:40.327  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:40.327   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:40.327  [2024-12-10 00:04:56.160293] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:40.327  [2024-12-10 00:04:56.160338] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:40.586  [2024-12-10 00:04:56.236244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:40.586  [2024-12-10 00:04:56.274915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:40.586  [2024-12-10 00:04:56.274950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:40.586  [2024-12-10 00:04:56.274957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:40.586  [2024-12-10 00:04:56.274963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:40.586  [2024-12-10 00:04:56.274969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:40.586  [2024-12-10 00:04:56.275461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:40.845  [2024-12-10 00:04:56.488172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:40.845  [2024-12-10 00:04:56.520202] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:40.845  [2024-12-10 00:04:56.520380] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
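The target above receives its JSON configuration through `-c /dev/fd/62`: bash process substitution turns the `echo '{...}'` output into a readable `/dev/fd/N` path, so the inline config never touches disk. A minimal sketch of that pattern (the `read_config` function is a hypothetical stand-in for `nvmf_tgt -c <path>`, assuming bash):

```shell
#!/usr/bin/env bash
# <(...) exposes a command's stdout as a /dev/fd/N path; any program
# that takes a config-file argument can read the inline JSON from it.
read_config() {
  # stand-in for "nvmf_tgt -c <path>": just read the config back
  cat "$1"
}
read_config <(echo '{"subsystems": []}')
```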
00:23:41.413   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:41.413   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:41.413   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:41.413   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:41.413   00:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3112076
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3112076 /var/tmp/bdevperf.sock
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3112076 ']'
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:41.413   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:41.413    00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{
00:23:41.413    "subsystems": [
00:23:41.413      {
00:23:41.413        "subsystem": "keyring",
00:23:41.413        "config": [
00:23:41.413          {
00:23:41.413            "method": "keyring_file_add_key",
00:23:41.413            "params": {
00:23:41.413              "name": "key0",
00:23:41.413              "path": "/tmp/tmp.wYWtJyZDQX"
00:23:41.413            }
00:23:41.413          }
00:23:41.413        ]
00:23:41.413      },
00:23:41.413      {
00:23:41.413        "subsystem": "iobuf",
00:23:41.413        "config": [
00:23:41.413          {
00:23:41.413            "method": "iobuf_set_options",
00:23:41.413            "params": {
00:23:41.413              "small_pool_count": 8192,
00:23:41.413              "large_pool_count": 1024,
00:23:41.413              "small_bufsize": 8192,
00:23:41.413              "large_bufsize": 135168,
00:23:41.413              "enable_numa": false
00:23:41.413            }
00:23:41.413          }
00:23:41.413        ]
00:23:41.413      },
00:23:41.413      {
00:23:41.413        "subsystem": "sock",
00:23:41.413        "config": [
00:23:41.413          {
00:23:41.413            "method": "sock_set_default_impl",
00:23:41.413            "params": {
00:23:41.413              "impl_name": "posix"
00:23:41.413            }
00:23:41.413          },
00:23:41.413          {
00:23:41.413            "method": "sock_impl_set_options",
00:23:41.413            "params": {
00:23:41.413              "impl_name": "ssl",
00:23:41.413              "recv_buf_size": 4096,
00:23:41.413              "send_buf_size": 4096,
00:23:41.413              "enable_recv_pipe": true,
00:23:41.413              "enable_quickack": false,
00:23:41.413              "enable_placement_id": 0,
00:23:41.413              "enable_zerocopy_send_server": true,
00:23:41.413              "enable_zerocopy_send_client": false,
00:23:41.413              "zerocopy_threshold": 0,
00:23:41.413              "tls_version": 0,
00:23:41.414              "enable_ktls": false
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "sock_impl_set_options",
00:23:41.414            "params": {
00:23:41.414              "impl_name": "posix",
00:23:41.414              "recv_buf_size": 2097152,
00:23:41.414              "send_buf_size": 2097152,
00:23:41.414              "enable_recv_pipe": true,
00:23:41.414              "enable_quickack": false,
00:23:41.414              "enable_placement_id": 0,
00:23:41.414              "enable_zerocopy_send_server": true,
00:23:41.414              "enable_zerocopy_send_client": false,
00:23:41.414              "zerocopy_threshold": 0,
00:23:41.414              "tls_version": 0,
00:23:41.414              "enable_ktls": false
00:23:41.414            }
00:23:41.414          }
00:23:41.414        ]
00:23:41.414      },
00:23:41.414      {
00:23:41.414        "subsystem": "vmd",
00:23:41.414        "config": []
00:23:41.414      },
00:23:41.414      {
00:23:41.414        "subsystem": "accel",
00:23:41.414        "config": [
00:23:41.414          {
00:23:41.414            "method": "accel_set_options",
00:23:41.414            "params": {
00:23:41.414              "small_cache_size": 128,
00:23:41.414              "large_cache_size": 16,
00:23:41.414              "task_count": 2048,
00:23:41.414              "sequence_count": 2048,
00:23:41.414              "buf_count": 2048
00:23:41.414            }
00:23:41.414          }
00:23:41.414        ]
00:23:41.414      },
00:23:41.414      {
00:23:41.414        "subsystem": "bdev",
00:23:41.414        "config": [
00:23:41.414          {
00:23:41.414            "method": "bdev_set_options",
00:23:41.414            "params": {
00:23:41.414              "bdev_io_pool_size": 65535,
00:23:41.414              "bdev_io_cache_size": 256,
00:23:41.414              "bdev_auto_examine": true,
00:23:41.414              "iobuf_small_cache_size": 128,
00:23:41.414              "iobuf_large_cache_size": 16
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_raid_set_options",
00:23:41.414            "params": {
00:23:41.414              "process_window_size_kb": 1024,
00:23:41.414              "process_max_bandwidth_mb_sec": 0
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_iscsi_set_options",
00:23:41.414            "params": {
00:23:41.414              "timeout_sec": 30
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_nvme_set_options",
00:23:41.414            "params": {
00:23:41.414              "action_on_timeout": "none",
00:23:41.414              "timeout_us": 0,
00:23:41.414              "timeout_admin_us": 0,
00:23:41.414              "keep_alive_timeout_ms": 10000,
00:23:41.414              "arbitration_burst": 0,
00:23:41.414              "low_priority_weight": 0,
00:23:41.414              "medium_priority_weight": 0,
00:23:41.414              "high_priority_weight": 0,
00:23:41.414              "nvme_adminq_poll_period_us": 10000,
00:23:41.414              "nvme_ioq_poll_period_us": 0,
00:23:41.414              "io_queue_requests": 512,
00:23:41.414              "delay_cmd_submit": true,
00:23:41.414              "transport_retry_count": 4,
00:23:41.414              "bdev_retry_count": 3,
00:23:41.414              "transport_ack_timeout": 0,
00:23:41.414              "ctrlr_loss_timeout_sec": 0,
00:23:41.414              "reconnect_delay_sec": 0,
00:23:41.414              "fast_io_fail_timeout_sec": 0,
00:23:41.414              "disable_auto_failback": false,
00:23:41.414              "generate_uuids": false,
00:23:41.414              "transport_tos": 0,
00:23:41.414              "nvme_error_stat": false,
00:23:41.414              "rdma_srq_size": 0,
00:23:41.414              "io_path_stat": false,
00:23:41.414              "allow_accel_sequence": false,
00:23:41.414              "rdma_max_cq_size": 0,
00:23:41.414              "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:41.414  ,
00:23:41.414              "dhchap_digests": [
00:23:41.414                "sha256",
00:23:41.414                "sha384",
00:23:41.414                "sha512"
00:23:41.414              ],
00:23:41.414              "dhchap_dhgroups": [
00:23:41.414                "null",
00:23:41.414                "ffdhe2048",
00:23:41.414                "ffdhe3072",
00:23:41.414                "ffdhe4096",
00:23:41.414                "ffdhe6144",
00:23:41.414                "ffdhe8192"
00:23:41.414              ]
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_nvme_attach_controller",
00:23:41.414            "params": {
00:23:41.414              "name": "TLSTEST",
00:23:41.414              "trtype": "TCP",
00:23:41.414              "adrfam": "IPv4",
00:23:41.414              "traddr": "10.0.0.2",
00:23:41.414              "trsvcid": "4420",
00:23:41.414              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:41.414              "prchk_reftag": false,
00:23:41.414              "prchk_guard": false,
00:23:41.414              "ctrlr_loss_timeout_sec": 0,
00:23:41.414              "reconnect_delay_sec": 0,
00:23:41.414              "fast_io_fail_timeout_sec": 0,
00:23:41.414              "psk": "key0",
00:23:41.414              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:41.414              "hdgst": false,
00:23:41.414              "ddgst": false,
00:23:41.414              "multipath": "multipath"
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_nvme_set_hotplug",
00:23:41.414            "params": {
00:23:41.414              "period_us": 100000,
00:23:41.414              "enable": false
00:23:41.414            }
00:23:41.414          },
00:23:41.414          {
00:23:41.414            "method": "bdev_wait_for_examine"
00:23:41.414          }
00:23:41.414        ]
00:23:41.414      },
00:23:41.414      {
00:23:41.414        "subsystem": "nbd",
00:23:41.414        "config": []
00:23:41.414      }
00:23:41.414    ]
00:23:41.414  }'
00:23:41.414   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:41.414   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:41.414  [2024-12-10 00:04:57.074889] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:41.414  [2024-12-10 00:04:57.074935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112076 ]
00:23:41.415  [2024-12-10 00:04:57.148120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:41.415  [2024-12-10 00:04:57.189340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:23:41.674  [2024-12-10 00:04:57.342954] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:42.242   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:42.242   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:42.242   00:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:42.242  Running I/O for 10 seconds...
00:23:44.555       5357.00 IOPS,    20.93 MiB/s
[2024-12-09T23:05:01.348Z]      5486.50 IOPS,    21.43 MiB/s
[2024-12-09T23:05:02.288Z]      5469.33 IOPS,    21.36 MiB/s
[2024-12-09T23:05:03.224Z]      5510.50 IOPS,    21.53 MiB/s
[2024-12-09T23:05:04.159Z]      5506.00 IOPS,    21.51 MiB/s
[2024-12-09T23:05:05.096Z]      5525.00 IOPS,    21.58 MiB/s
[2024-12-09T23:05:06.178Z]      5533.57 IOPS,    21.62 MiB/s
[2024-12-09T23:05:07.117Z]      5533.38 IOPS,    21.61 MiB/s
[2024-12-09T23:05:08.060Z]      5538.89 IOPS,    21.64 MiB/s
[2024-12-09T23:05:08.060Z]      5542.80 IOPS,    21.65 MiB/s
00:23:52.203                                                                                                  Latency(us)
00:23:52.203  
[2024-12-09T23:05:08.060Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:52.203  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:52.203  	 Verification LBA range: start 0x0 length 0x2000
00:23:52.203  	 TLSTESTn1           :      10.02    5545.81      21.66       0.00     0.00   23044.49    5086.84   36450.50
00:23:52.203  
[2024-12-09T23:05:08.060Z]  ===================================================================================================================
00:23:52.203  
[2024-12-09T23:05:08.060Z]  Total                       :               5545.81      21.66       0.00     0.00   23044.49    5086.84   36450.50
00:23:52.203  {
00:23:52.203    "results": [
00:23:52.203      {
00:23:52.203        "job": "TLSTESTn1",
00:23:52.203        "core_mask": "0x4",
00:23:52.203        "workload": "verify",
00:23:52.203        "status": "finished",
00:23:52.203        "verify_range": {
00:23:52.203          "start": 0,
00:23:52.203          "length": 8192
00:23:52.203        },
00:23:52.203        "queue_depth": 128,
00:23:52.203        "io_size": 4096,
00:23:52.203        "runtime": 10.017103,
00:23:52.203        "iops": 5545.814992618125,
00:23:52.203        "mibps": 21.66333981491455,
00:23:52.203        "io_failed": 0,
00:23:52.203        "io_timeout": 0,
00:23:52.203        "avg_latency_us": 23044.488328245956,
00:23:52.203        "min_latency_us": 5086.8419047619045,
00:23:52.203        "max_latency_us": 36450.49904761905
00:23:52.203      }
00:23:52.203    ],
00:23:52.203    "core_count": 1
00:23:52.203  }
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3112076
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3112076 ']'
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3112076
00:23:52.467    00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:52.467    00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112076
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112076'
00:23:52.467  killing process with pid 3112076
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3112076
00:23:52.467  Received shutdown signal, test time was about 10.000000 seconds
00:23:52.467  
00:23:52.467                                                                                                  Latency(us)
00:23:52.467  
[2024-12-09T23:05:08.324Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:52.467  
[2024-12-09T23:05:08.324Z]  ===================================================================================================================
00:23:52.467  
[2024-12-09T23:05:08.324Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3112076
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3111942
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3111942 ']'
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3111942
00:23:52.467    00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:52.467   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:52.467    00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3111942
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3111942'
00:23:52.726  killing process with pid 3111942
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3111942
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3111942
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3113982
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3113982
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3113982 ']'
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:52.726  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:52.726   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:52.726  [2024-12-10 00:05:08.548915] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:52.726  [2024-12-10 00:05:08.548962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:52.987  [2024-12-10 00:05:08.626516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:52.987  [2024-12-10 00:05:08.665185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:52.987  [2024-12-10 00:05:08.665222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:52.987  [2024-12-10 00:05:08.665229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:52.987  [2024-12-10 00:05:08.665236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:52.987  [2024-12-10 00:05:08.665241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:52.987  [2024-12-10 00:05:08.665718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.wYWtJyZDQX
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.wYWtJyZDQX
00:23:52.987   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:23:53.247  [2024-12-10 00:05:08.960845] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:53.247   00:05:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:23:53.505   00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:23:53.505  [2024-12-10 00:05:09.341793] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:53.505  [2024-12-10 00:05:09.341977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:53.771   00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:23:53.771  malloc0
00:23:53.771   00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:23:54.036   00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:54.294   00:05:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
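The preceding `rpc.py` calls (tls.sh lines 52-59) are the standard TLS target bring-up: TCP transport, subsystem, `-k` (TLS) listener, namespace, keyring PSK, and host authorization. Consolidated below as a dry-run sketch; it prints the commands by default, and setting `RPC=scripts/rpc.py` would execute them against a live target. NQNs and the key path mirror the log; treat them as environment-specific:

```shell
# Dry-run by default: commands are echoed, not executed.
RPC="${RPC:-echo rpc.py}"
KEY=/tmp/tmp.wYWtJyZDQX
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem $NQN -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns $NQN malloc0 -n 1
$RPC keyring_file_add_key key0 $KEY
$RPC nvmf_subsystem_add_host $NQN nqn.2016-06.io.spdk:host1 --psk key0
```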
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3114237
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3114237 /var/tmp/bdevperf.sock
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3114237 ']'
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:54.562  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:54.562   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:54.562  [2024-12-10 00:05:10.212254] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:54.562  [2024-12-10 00:05:10.212304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114237 ]
00:23:54.562  [2024-12-10 00:05:10.287752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.562  [2024-12-10 00:05:10.327746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:54.821   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:54.821   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:54.821   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:54.821   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:23:55.080  [2024-12-10 00:05:10.779703] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:55.080  nvme0n1
00:23:55.080   00:05:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:55.338  Running I/O for 1 seconds...
00:23:56.272       5443.00 IOPS,    21.26 MiB/s
00:23:56.272                                                                                                  Latency(us)
00:23:56.272  
[2024-12-09T23:05:12.129Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:56.272  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:56.273  	 Verification LBA range: start 0x0 length 0x2000
00:23:56.273  	 nvme0n1             :       1.01    5501.16      21.49       0.00     0.00   23111.41    5242.88   22469.49
00:23:56.273  
[2024-12-09T23:05:12.130Z]  ===================================================================================================================
00:23:56.273  
[2024-12-09T23:05:12.130Z]  Total                       :               5501.16      21.49       0.00     0.00   23111.41    5242.88   22469.49
00:23:56.273  {
00:23:56.273    "results": [
00:23:56.273      {
00:23:56.273        "job": "nvme0n1",
00:23:56.273        "core_mask": "0x2",
00:23:56.273        "workload": "verify",
00:23:56.273        "status": "finished",
00:23:56.273        "verify_range": {
00:23:56.273          "start": 0,
00:23:56.273          "length": 8192
00:23:56.273        },
00:23:56.273        "queue_depth": 128,
00:23:56.273        "io_size": 4096,
00:23:56.273        "runtime": 1.012695,
00:23:56.273        "iops": 5501.162739028039,
00:23:56.273        "mibps": 21.488916949328278,
00:23:56.273        "io_failed": 0,
00:23:56.273        "io_timeout": 0,
00:23:56.273        "avg_latency_us": 23111.409800753907,
00:23:56.273        "min_latency_us": 5242.88,
00:23:56.273        "max_latency_us": 22469.485714285714
00:23:56.273      }
00:23:56.273    ],
00:23:56.273    "core_count": 1
00:23:56.273  }
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3114237
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3114237 ']'
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3114237
00:23:56.273    00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:56.273    00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114237
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114237'
00:23:56.273  killing process with pid 3114237
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3114237
00:23:56.273  Received shutdown signal, test time was about 1.000000 seconds
00:23:56.273  
00:23:56.273                                                                                                  Latency(us)
00:23:56.273  
[2024-12-09T23:05:12.130Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:56.273  
[2024-12-09T23:05:12.130Z]  ===================================================================================================================
00:23:56.273  
[2024-12-09T23:05:12.130Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:56.273   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3114237
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3113982
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3113982 ']'
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3113982
00:23:56.531    00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:56.531    00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3113982
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3113982'
00:23:56.531  killing process with pid 3113982
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3113982
00:23:56.531   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3113982
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3114591
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3114591
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3114591 ']'
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:56.790  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:56.790   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:56.790  [2024-12-10 00:05:12.485290] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:56.790  [2024-12-10 00:05:12.485340] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:56.790  [2024-12-10 00:05:12.561419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:56.790  [2024-12-10 00:05:12.600141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:56.790  [2024-12-10 00:05:12.600179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:56.790  [2024-12-10 00:05:12.600186] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:56.790  [2024-12-10 00:05:12.600209] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:56.790  [2024-12-10 00:05:12.600214] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:56.790  [2024-12-10 00:05:12.600713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:57.049   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:57.049  [2024-12-10 00:05:12.732076] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:57.049  malloc0
00:23:57.049  [2024-12-10 00:05:12.760068] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:57.050  [2024-12-10 00:05:12.760262] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3114718
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3114718 /var/tmp/bdevperf.sock
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3114718 ']'
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:57.050  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:57.050   00:05:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:57.050  [2024-12-10 00:05:12.833366] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:57.050  [2024-12-10 00:05:12.833408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3114718 ]
00:23:57.050  [2024-12-10 00:05:12.905307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:57.308  [2024-12-10 00:05:12.944344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:23:57.308   00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:57.308   00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:23:57.308   00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wYWtJyZDQX
00:23:57.567   00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
00:23:57.567  [2024-12-10 00:05:13.396242] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:57.826  nvme0n1
00:23:57.826   00:05:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:57.826  Running I/O for 1 seconds...
00:23:58.764       5562.00 IOPS,    21.73 MiB/s
00:23:58.764                                                                                                  Latency(us)
00:23:58.764  
[2024-12-09T23:05:14.621Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:58.764  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:58.764  	 Verification LBA range: start 0x0 length 0x2000
00:23:58.764  	 nvme0n1             :       1.02    5597.96      21.87       0.00     0.00   22700.52    7333.79   20721.86
00:23:58.764  
[2024-12-09T23:05:14.621Z]  ===================================================================================================================
00:23:58.764  
[2024-12-09T23:05:14.621Z]  Total                       :               5597.96      21.87       0.00     0.00   22700.52    7333.79   20721.86
00:23:58.764  {
00:23:58.764    "results": [
00:23:58.764      {
00:23:58.764        "job": "nvme0n1",
00:23:58.764        "core_mask": "0x2",
00:23:58.764        "workload": "verify",
00:23:58.764        "status": "finished",
00:23:58.764        "verify_range": {
00:23:58.764          "start": 0,
00:23:58.764          "length": 8192
00:23:58.764        },
00:23:58.764        "queue_depth": 128,
00:23:58.764        "io_size": 4096,
00:23:58.764        "runtime": 1.016442,
00:23:58.764        "iops": 5597.95836850504,
00:23:58.764        "mibps": 21.867024876972813,
00:23:58.764        "io_failed": 0,
00:23:58.764        "io_timeout": 0,
00:23:58.764        "avg_latency_us": 22700.523561134825,
00:23:58.764        "min_latency_us": 7333.7904761904765,
00:23:58.764        "max_latency_us": 20721.859047619047
00:23:58.764      }
00:23:58.764    ],
00:23:58.764    "core_count": 1
00:23:58.764  }
00:23:58.764    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config
00:23:58.764    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:58.764    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:59.023    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:59.023   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{
00:23:59.023  "subsystems": [
00:23:59.023  {
00:23:59.023  "subsystem": "keyring",
00:23:59.023  "config": [
00:23:59.023  {
00:23:59.023  "method": "keyring_file_add_key",
00:23:59.023  "params": {
00:23:59.023  "name": "key0",
00:23:59.023  "path": "/tmp/tmp.wYWtJyZDQX"
00:23:59.023  }
00:23:59.023  }
00:23:59.023  ]
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "subsystem": "iobuf",
00:23:59.023  "config": [
00:23:59.023  {
00:23:59.023  "method": "iobuf_set_options",
00:23:59.023  "params": {
00:23:59.023  "small_pool_count": 8192,
00:23:59.023  "large_pool_count": 1024,
00:23:59.023  "small_bufsize": 8192,
00:23:59.023  "large_bufsize": 135168,
00:23:59.023  "enable_numa": false
00:23:59.023  }
00:23:59.023  }
00:23:59.023  ]
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "subsystem": "sock",
00:23:59.023  "config": [
00:23:59.023  {
00:23:59.023  "method": "sock_set_default_impl",
00:23:59.023  "params": {
00:23:59.023  "impl_name": "posix"
00:23:59.023  }
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "method": "sock_impl_set_options",
00:23:59.023  "params": {
00:23:59.023  "impl_name": "ssl",
00:23:59.023  "recv_buf_size": 4096,
00:23:59.023  "send_buf_size": 4096,
00:23:59.023  "enable_recv_pipe": true,
00:23:59.023  "enable_quickack": false,
00:23:59.023  "enable_placement_id": 0,
00:23:59.023  "enable_zerocopy_send_server": true,
00:23:59.023  "enable_zerocopy_send_client": false,
00:23:59.023  "zerocopy_threshold": 0,
00:23:59.023  "tls_version": 0,
00:23:59.023  "enable_ktls": false
00:23:59.023  }
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "method": "sock_impl_set_options",
00:23:59.023  "params": {
00:23:59.023  "impl_name": "posix",
00:23:59.023  "recv_buf_size": 2097152,
00:23:59.023  "send_buf_size": 2097152,
00:23:59.023  "enable_recv_pipe": true,
00:23:59.023  "enable_quickack": false,
00:23:59.023  "enable_placement_id": 0,
00:23:59.023  "enable_zerocopy_send_server": true,
00:23:59.023  "enable_zerocopy_send_client": false,
00:23:59.023  "zerocopy_threshold": 0,
00:23:59.023  "tls_version": 0,
00:23:59.023  "enable_ktls": false
00:23:59.023  }
00:23:59.023  }
00:23:59.023  ]
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "subsystem": "vmd",
00:23:59.023  "config": []
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "subsystem": "accel",
00:23:59.023  "config": [
00:23:59.023  {
00:23:59.023  "method": "accel_set_options",
00:23:59.023  "params": {
00:23:59.023  "small_cache_size": 128,
00:23:59.023  "large_cache_size": 16,
00:23:59.023  "task_count": 2048,
00:23:59.023  "sequence_count": 2048,
00:23:59.023  "buf_count": 2048
00:23:59.023  }
00:23:59.023  }
00:23:59.023  ]
00:23:59.023  },
00:23:59.023  {
00:23:59.023  "subsystem": "bdev",
00:23:59.023  "config": [
00:23:59.023  {
00:23:59.023  "method": "bdev_set_options",
00:23:59.023  "params": {
00:23:59.023  "bdev_io_pool_size": 65535,
00:23:59.023  "bdev_io_cache_size": 256,
00:23:59.024  "bdev_auto_examine": true,
00:23:59.024  "iobuf_small_cache_size": 128,
00:23:59.024  "iobuf_large_cache_size": 16
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_raid_set_options",
00:23:59.024  "params": {
00:23:59.024  "process_window_size_kb": 1024,
00:23:59.024  "process_max_bandwidth_mb_sec": 0
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_iscsi_set_options",
00:23:59.024  "params": {
00:23:59.024  "timeout_sec": 30
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_nvme_set_options",
00:23:59.024  "params": {
00:23:59.024  "action_on_timeout": "none",
00:23:59.024  "timeout_us": 0,
00:23:59.024  "timeout_admin_us": 0,
00:23:59.024  "keep_alive_timeout_ms": 10000,
00:23:59.024  "arbitration_burst": 0,
00:23:59.024  "low_priority_weight": 0,
00:23:59.024  "medium_priority_weight": 0,
00:23:59.024  "high_priority_weight": 0,
00:23:59.024  "nvme_adminq_poll_period_us": 10000,
00:23:59.024  "nvme_ioq_poll_period_us": 0,
00:23:59.024  "io_queue_requests": 0,
00:23:59.024  "delay_cmd_submit": true,
00:23:59.024  "transport_retry_count": 4,
00:23:59.024  "bdev_retry_count": 3,
00:23:59.024  "transport_ack_timeout": 0,
00:23:59.024  "ctrlr_loss_timeout_sec": 0,
00:23:59.024  "reconnect_delay_sec": 0,
00:23:59.024  "fast_io_fail_timeout_sec": 0,
00:23:59.024  "disable_auto_failback": false,
00:23:59.024  "generate_uuids": false,
00:23:59.024  "transport_tos": 0,
00:23:59.024  "nvme_error_stat": false,
00:23:59.024  "rdma_srq_size": 0,
00:23:59.024  "io_path_stat": false,
00:23:59.024  "allow_accel_sequence": false,
00:23:59.024  "rdma_max_cq_size": 0,
00:23:59.024  "rdma_cm_event_timeout_ms": 0,
00:23:59.024  "dhchap_digests": [
00:23:59.024  "sha256",
00:23:59.024  "sha384",
00:23:59.024  "sha512"
00:23:59.024  ],
00:23:59.024  "dhchap_dhgroups": [
00:23:59.024  "null",
00:23:59.024  "ffdhe2048",
00:23:59.024  "ffdhe3072",
00:23:59.024  "ffdhe4096",
00:23:59.024  "ffdhe6144",
00:23:59.024  "ffdhe8192"
00:23:59.024  ]
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_nvme_set_hotplug",
00:23:59.024  "params": {
00:23:59.024  "period_us": 100000,
00:23:59.024  "enable": false
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_malloc_create",
00:23:59.024  "params": {
00:23:59.024  "name": "malloc0",
00:23:59.024  "num_blocks": 8192,
00:23:59.024  "block_size": 4096,
00:23:59.024  "physical_block_size": 4096,
00:23:59.024  "uuid": "9c3d70b6-3a8a-4df5-b4e6-2b4bccd06333",
00:23:59.024  "optimal_io_boundary": 0,
00:23:59.024  "md_size": 0,
00:23:59.024  "dif_type": 0,
00:23:59.024  "dif_is_head_of_md": false,
00:23:59.024  "dif_pi_format": 0
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "bdev_wait_for_examine"
00:23:59.024  }
00:23:59.024  ]
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "subsystem": "nbd",
00:23:59.024  "config": []
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "subsystem": "scheduler",
00:23:59.024  "config": [
00:23:59.024  {
00:23:59.024  "method": "framework_set_scheduler",
00:23:59.024  "params": {
00:23:59.024  "name": "static"
00:23:59.024  }
00:23:59.024  }
00:23:59.024  ]
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "subsystem": "nvmf",
00:23:59.024  "config": [
00:23:59.024  {
00:23:59.024  "method": "nvmf_set_config",
00:23:59.024  "params": {
00:23:59.024  "discovery_filter": "match_any",
00:23:59.024  "admin_cmd_passthru": {
00:23:59.024  "identify_ctrlr": false
00:23:59.024  },
00:23:59.024  "dhchap_digests": [
00:23:59.024  "sha256",
00:23:59.024  "sha384",
00:23:59.024  "sha512"
00:23:59.024  ],
00:23:59.024  "dhchap_dhgroups": [
00:23:59.024  "null",
00:23:59.024  "ffdhe2048",
00:23:59.024  "ffdhe3072",
00:23:59.024  "ffdhe4096",
00:23:59.024  "ffdhe6144",
00:23:59.024  "ffdhe8192"
00:23:59.024  ]
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_set_max_subsystems",
00:23:59.024  "params": {
00:23:59.024  "max_subsystems": 1024
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_set_crdt",
00:23:59.024  "params": {
00:23:59.024  "crdt1": 0,
00:23:59.024  "crdt2": 0,
00:23:59.024  "crdt3": 0
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_create_transport",
00:23:59.024  "params": {
00:23:59.024  "trtype": "TCP",
00:23:59.024  "max_queue_depth": 128,
00:23:59.024  "max_io_qpairs_per_ctrlr": 127,
00:23:59.024  "in_capsule_data_size": 4096,
00:23:59.024  "max_io_size": 131072,
00:23:59.024  "io_unit_size": 131072,
00:23:59.024  "max_aq_depth": 128,
00:23:59.024  "num_shared_buffers": 511,
00:23:59.024  "buf_cache_size": 4294967295,
00:23:59.024  "dif_insert_or_strip": false,
00:23:59.024  "zcopy": false,
00:23:59.024  "c2h_success": false,
00:23:59.024  "sock_priority": 0,
00:23:59.024  "abort_timeout_sec": 1,
00:23:59.024  "ack_timeout": 0,
00:23:59.024  "data_wr_pool_size": 0
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_create_subsystem",
00:23:59.024  "params": {
00:23:59.024  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.024  "allow_any_host": false,
00:23:59.024  "serial_number": "00000000000000000000",
00:23:59.024  "model_number": "SPDK bdev Controller",
00:23:59.024  "max_namespaces": 32,
00:23:59.024  "min_cntlid": 1,
00:23:59.024  "max_cntlid": 65519,
00:23:59.024  "ana_reporting": false
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_subsystem_add_host",
00:23:59.024  "params": {
00:23:59.024  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.024  "host": "nqn.2016-06.io.spdk:host1",
00:23:59.024  "psk": "key0"
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_subsystem_add_ns",
00:23:59.024  "params": {
00:23:59.024  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.024  "namespace": {
00:23:59.024  "nsid": 1,
00:23:59.024  "bdev_name": "malloc0",
00:23:59.024  "nguid": "9C3D70B63A8A4DF5B4E62B4BCCD06333",
00:23:59.024  "uuid": "9c3d70b6-3a8a-4df5-b4e6-2b4bccd06333",
00:23:59.024  "no_auto_visible": false
00:23:59.024  }
00:23:59.024  }
00:23:59.024  },
00:23:59.024  {
00:23:59.024  "method": "nvmf_subsystem_add_listener",
00:23:59.024  "params": {
00:23:59.024  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.024  "listen_address": {
00:23:59.024  "trtype": "TCP",
00:23:59.024  "adrfam": "IPv4",
00:23:59.024  "traddr": "10.0.0.2",
00:23:59.024  "trsvcid": "4420"
00:23:59.024  },
00:23:59.024  "secure_channel": false,
00:23:59.024  "sock_impl": "ssl"
00:23:59.024  }
00:23:59.024  }
00:23:59.024  ]
00:23:59.024  }
00:23:59.024  ]
00:23:59.024  }'
00:23:59.024    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config
00:23:59.284   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{
00:23:59.284    "subsystems": [
00:23:59.284      {
00:23:59.284        "subsystem": "keyring",
00:23:59.284        "config": [
00:23:59.284          {
00:23:59.284            "method": "keyring_file_add_key",
00:23:59.284            "params": {
00:23:59.284              "name": "key0",
00:23:59.284              "path": "/tmp/tmp.wYWtJyZDQX"
00:23:59.284            }
00:23:59.284          }
00:23:59.284        ]
00:23:59.284      },
00:23:59.284      {
00:23:59.284        "subsystem": "iobuf",
00:23:59.284        "config": [
00:23:59.284          {
00:23:59.284            "method": "iobuf_set_options",
00:23:59.284            "params": {
00:23:59.284              "small_pool_count": 8192,
00:23:59.284              "large_pool_count": 1024,
00:23:59.284              "small_bufsize": 8192,
00:23:59.284              "large_bufsize": 135168,
00:23:59.284              "enable_numa": false
00:23:59.284            }
00:23:59.284          }
00:23:59.284        ]
00:23:59.284      },
00:23:59.284      {
00:23:59.284        "subsystem": "sock",
00:23:59.284        "config": [
00:23:59.284          {
00:23:59.284            "method": "sock_set_default_impl",
00:23:59.284            "params": {
00:23:59.284              "impl_name": "posix"
00:23:59.284            }
00:23:59.284          },
00:23:59.284          {
00:23:59.284            "method": "sock_impl_set_options",
00:23:59.284            "params": {
00:23:59.284              "impl_name": "ssl",
00:23:59.284              "recv_buf_size": 4096,
00:23:59.284              "send_buf_size": 4096,
00:23:59.284              "enable_recv_pipe": true,
00:23:59.284              "enable_quickack": false,
00:23:59.284              "enable_placement_id": 0,
00:23:59.284              "enable_zerocopy_send_server": true,
00:23:59.284              "enable_zerocopy_send_client": false,
00:23:59.284              "zerocopy_threshold": 0,
00:23:59.284              "tls_version": 0,
00:23:59.284              "enable_ktls": false
00:23:59.284            }
00:23:59.284          },
00:23:59.284          {
00:23:59.284            "method": "sock_impl_set_options",
00:23:59.284            "params": {
00:23:59.284              "impl_name": "posix",
00:23:59.284              "recv_buf_size": 2097152,
00:23:59.284              "send_buf_size": 2097152,
00:23:59.284              "enable_recv_pipe": true,
00:23:59.284              "enable_quickack": false,
00:23:59.284              "enable_placement_id": 0,
00:23:59.284              "enable_zerocopy_send_server": true,
00:23:59.285              "enable_zerocopy_send_client": false,
00:23:59.285              "zerocopy_threshold": 0,
00:23:59.285              "tls_version": 0,
00:23:59.285              "enable_ktls": false
00:23:59.285            }
00:23:59.285          }
00:23:59.285        ]
00:23:59.285      },
00:23:59.285      {
00:23:59.285        "subsystem": "vmd",
00:23:59.285        "config": []
00:23:59.285      },
00:23:59.285      {
00:23:59.285        "subsystem": "accel",
00:23:59.285        "config": [
00:23:59.285          {
00:23:59.285            "method": "accel_set_options",
00:23:59.285            "params": {
00:23:59.285              "small_cache_size": 128,
00:23:59.285              "large_cache_size": 16,
00:23:59.285              "task_count": 2048,
00:23:59.285              "sequence_count": 2048,
00:23:59.285              "buf_count": 2048
00:23:59.285            }
00:23:59.285          }
00:23:59.285        ]
00:23:59.285      },
00:23:59.285      {
00:23:59.285        "subsystem": "bdev",
00:23:59.285        "config": [
00:23:59.285          {
00:23:59.285            "method": "bdev_set_options",
00:23:59.285            "params": {
00:23:59.285              "bdev_io_pool_size": 65535,
00:23:59.285              "bdev_io_cache_size": 256,
00:23:59.285              "bdev_auto_examine": true,
00:23:59.285              "iobuf_small_cache_size": 128,
00:23:59.285              "iobuf_large_cache_size": 16
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_raid_set_options",
00:23:59.285            "params": {
00:23:59.285              "process_window_size_kb": 1024,
00:23:59.285              "process_max_bandwidth_mb_sec": 0
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_iscsi_set_options",
00:23:59.285            "params": {
00:23:59.285              "timeout_sec": 30
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_nvme_set_options",
00:23:59.285            "params": {
00:23:59.285              "action_on_timeout": "none",
00:23:59.285              "timeout_us": 0,
00:23:59.285              "timeout_admin_us": 0,
00:23:59.285              "keep_alive_timeout_ms": 10000,
00:23:59.285              "arbitration_burst": 0,
00:23:59.285              "low_priority_weight": 0,
00:23:59.285              "medium_priority_weight": 0,
00:23:59.285              "high_priority_weight": 0,
00:23:59.285              "nvme_adminq_poll_period_us": 10000,
00:23:59.285              "nvme_ioq_poll_period_us": 0,
00:23:59.285              "io_queue_requests": 512,
00:23:59.285              "delay_cmd_submit": true,
00:23:59.285              "transport_retry_count": 4,
00:23:59.285              "bdev_retry_count": 3,
00:23:59.285              "transport_ack_timeout": 0,
00:23:59.285              "ctrlr_loss_timeout_sec": 0,
00:23:59.285              "reconnect_delay_sec": 0,
00:23:59.285              "fast_io_fail_timeout_sec": 0,
00:23:59.285              "disable_auto_failback": false,
00:23:59.285              "generate_uuids": false,
00:23:59.285              "transport_tos": 0,
00:23:59.285              "nvme_error_stat": false,
00:23:59.285              "rdma_srq_size": 0,
00:23:59.285              "io_path_stat": false,
00:23:59.285              "allow_accel_sequence": false,
00:23:59.285              "rdma_max_cq_size": 0,
00:23:59.285              "rdma_cm_event_timeout_ms": 0,
00:23:59.285              "dhchap_digests": [
00:23:59.285                "sha256",
00:23:59.285                "sha384",
00:23:59.285                "sha512"
00:23:59.285              ],
00:23:59.285              "dhchap_dhgroups": [
00:23:59.285                "null",
00:23:59.285                "ffdhe2048",
00:23:59.285                "ffdhe3072",
00:23:59.285                "ffdhe4096",
00:23:59.285                "ffdhe6144",
00:23:59.285                "ffdhe8192"
00:23:59.285              ]
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_nvme_attach_controller",
00:23:59.285            "params": {
00:23:59.285              "name": "nvme0",
00:23:59.285              "trtype": "TCP",
00:23:59.285              "adrfam": "IPv4",
00:23:59.285              "traddr": "10.0.0.2",
00:23:59.285              "trsvcid": "4420",
00:23:59.285              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.285              "prchk_reftag": false,
00:23:59.285              "prchk_guard": false,
00:23:59.285              "ctrlr_loss_timeout_sec": 0,
00:23:59.285              "reconnect_delay_sec": 0,
00:23:59.285              "fast_io_fail_timeout_sec": 0,
00:23:59.285              "psk": "key0",
00:23:59.285              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:23:59.285              "hdgst": false,
00:23:59.285              "ddgst": false,
00:23:59.285              "multipath": "multipath"
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_nvme_set_hotplug",
00:23:59.285            "params": {
00:23:59.285              "period_us": 100000,
00:23:59.285              "enable": false
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_enable_histogram",
00:23:59.285            "params": {
00:23:59.285              "name": "nvme0n1",
00:23:59.285              "enable": true
00:23:59.285            }
00:23:59.285          },
00:23:59.285          {
00:23:59.285            "method": "bdev_wait_for_examine"
00:23:59.285          }
00:23:59.285        ]
00:23:59.285      },
00:23:59.285      {
00:23:59.285        "subsystem": "nbd",
00:23:59.285        "config": []
00:23:59.285      }
00:23:59.285    ]
00:23:59.285  }'
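The JSON block above is the initiator-side (bdevperf) config streamed over `/dev/fd/63`. The two pieces that make the TLS path work are `keyring_file_add_key`, which registers the PSK file under the name `key0`, and `bdev_nvme_attach_controller`, whose `"psk"` parameter references that keyring entry by name rather than by file path. A minimal sketch of that dependency, using values taken from this log (the builder function itself is hypothetical, not an SPDK API):

```python
import json

def build_initiator_config(psk_path, psk_name="key0"):
    """Skeleton of the bdevperf-side config shown in the log: the keyring
    entry must exist before bdev_nvme_attach_controller references it
    through the "psk" parameter. Hypothetical helper, not an SPDK API."""
    return {
        "subsystems": [
            {"subsystem": "keyring", "config": [
                {"method": "keyring_file_add_key",
                 "params": {"name": psk_name, "path": psk_path}}]},
            {"subsystem": "bdev", "config": [
                {"method": "bdev_nvme_attach_controller",
                 "params": {"name": "nvme0", "trtype": "TCP",
                            "adrfam": "IPv4", "traddr": "10.0.0.2",
                            "trsvcid": "4420",
                            "subnqn": "nqn.2016-06.io.spdk:cnode1",
                            "hostnqn": "nqn.2016-06.io.spdk:host1",
                            "psk": psk_name}}]},
        ]
    }

cfg = build_initiator_config("/tmp/tmp.wYWtJyZDQX")
# The "psk" field names the keyring entry, not the key file itself.
keyring_name = cfg["subsystems"][0]["config"][0]["params"]["name"]
attach_psk = cfg["subsystems"][1]["config"][0]["params"]["psk"]
assert keyring_name == attach_psk
assert json.dumps(cfg)  # the config must serialize to valid JSON
```

The same indirection appears on the target side, where `nvmf_subsystem_add_host` uses `"psk": "key0"` to tie the host NQN to the registered key.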
00:23:59.285   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3114718
00:23:59.285   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3114718 ']'
00:23:59.285   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3114718
00:23:59.285    00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:59.285   00:05:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:59.285    00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114718
00:23:59.285   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:23:59.285   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:23:59.285   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114718'
00:23:59.285  killing process with pid 3114718
00:23:59.285   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3114718
00:23:59.285  Received shutdown signal, test time was about 1.000000 seconds
00:23:59.285  
00:23:59.285                                                                                                  Latency(us)
00:23:59.285  
[2024-12-09T23:05:15.142Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:59.285  
[2024-12-09T23:05:15.142Z]  ===================================================================================================================
00:23:59.285  
[2024-12-09T23:05:15.142Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:23:59.285   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3114718
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3114591
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3114591 ']'
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3114591
00:23:59.544    00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:23:59.544    00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3114591
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3114591'
00:23:59.544  killing process with pid 3114591
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3114591
00:23:59.544   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3114591
00:23:59.803   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62
00:23:59.803   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:59.803    00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{
00:23:59.803  "subsystems": [
00:23:59.803  {
00:23:59.803  "subsystem": "keyring",
00:23:59.803  "config": [
00:23:59.803  {
00:23:59.803  "method": "keyring_file_add_key",
00:23:59.803  "params": {
00:23:59.803  "name": "key0",
00:23:59.803  "path": "/tmp/tmp.wYWtJyZDQX"
00:23:59.803  }
00:23:59.803  }
00:23:59.803  ]
00:23:59.803  },
00:23:59.803  {
00:23:59.803  "subsystem": "iobuf",
00:23:59.803  "config": [
00:23:59.803  {
00:23:59.803  "method": "iobuf_set_options",
00:23:59.803  "params": {
00:23:59.803  "small_pool_count": 8192,
00:23:59.803  "large_pool_count": 1024,
00:23:59.803  "small_bufsize": 8192,
00:23:59.803  "large_bufsize": 135168,
00:23:59.803  "enable_numa": false
00:23:59.803  }
00:23:59.803  }
00:23:59.803  ]
00:23:59.803  },
00:23:59.803  {
00:23:59.803  "subsystem": "sock",
00:23:59.803  "config": [
00:23:59.803  {
00:23:59.803  "method": "sock_set_default_impl",
00:23:59.803  "params": {
00:23:59.803  "impl_name": "posix"
00:23:59.803  }
00:23:59.803  },
00:23:59.803  {
00:23:59.803  "method": "sock_impl_set_options",
00:23:59.803  "params": {
00:23:59.803  "impl_name": "ssl",
00:23:59.803  "recv_buf_size": 4096,
00:23:59.803  "send_buf_size": 4096,
00:23:59.803  "enable_recv_pipe": true,
00:23:59.803  "enable_quickack": false,
00:23:59.803  "enable_placement_id": 0,
00:23:59.803  "enable_zerocopy_send_server": true,
00:23:59.803  "enable_zerocopy_send_client": false,
00:23:59.803  "zerocopy_threshold": 0,
00:23:59.803  "tls_version": 0,
00:23:59.803  "enable_ktls": false
00:23:59.803  }
00:23:59.803  },
00:23:59.803  {
00:23:59.803  "method": "sock_impl_set_options",
00:23:59.803  "params": {
00:23:59.803  "impl_name": "posix",
00:23:59.803  "recv_buf_size": 2097152,
00:23:59.803  "send_buf_size": 2097152,
00:23:59.803  "enable_recv_pipe": true,
00:23:59.803  "enable_quickack": false,
00:23:59.803  "enable_placement_id": 0,
00:23:59.803  "enable_zerocopy_send_server": true,
00:23:59.803  "enable_zerocopy_send_client": false,
00:23:59.803  "zerocopy_threshold": 0,
00:23:59.803  "tls_version": 0,
00:23:59.803  "enable_ktls": false
00:23:59.803  }
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "vmd",
00:23:59.804  "config": []
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "accel",
00:23:59.804  "config": [
00:23:59.804  {
00:23:59.804  "method": "accel_set_options",
00:23:59.804  "params": {
00:23:59.804  "small_cache_size": 128,
00:23:59.804  "large_cache_size": 16,
00:23:59.804  "task_count": 2048,
00:23:59.804  "sequence_count": 2048,
00:23:59.804  "buf_count": 2048
00:23:59.804  }
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "bdev",
00:23:59.804  "config": [
00:23:59.804  {
00:23:59.804  "method": "bdev_set_options",
00:23:59.804  "params": {
00:23:59.804  "bdev_io_pool_size": 65535,
00:23:59.804  "bdev_io_cache_size": 256,
00:23:59.804  "bdev_auto_examine": true,
00:23:59.804  "iobuf_small_cache_size": 128,
00:23:59.804  "iobuf_large_cache_size": 16
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_raid_set_options",
00:23:59.804  "params": {
00:23:59.804  "process_window_size_kb": 1024,
00:23:59.804  "process_max_bandwidth_mb_sec": 0
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_iscsi_set_options",
00:23:59.804  "params": {
00:23:59.804  "timeout_sec": 30
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_nvme_set_options",
00:23:59.804  "params": {
00:23:59.804  "action_on_timeout": "none",
00:23:59.804  "timeout_us": 0,
00:23:59.804  "timeout_admin_us": 0,
00:23:59.804  "keep_alive_timeout_ms": 10000,
00:23:59.804  "arbitration_burst": 0,
00:23:59.804  "low_priority_weight": 0,
00:23:59.804  "medium_priority_weight": 0,
00:23:59.804  "high_priority_weight": 0,
00:23:59.804  "nvme_adminq_poll_period_us": 10000,
00:23:59.804  "nvme_ioq_poll_period_us": 0,
00:23:59.804  "io_queue_requests": 0,
00:23:59.804  "delay_cmd_submit": true,
00:23:59.804  "transport_retry_count": 4,
00:23:59.804  "bdev_retry_count": 3,
00:23:59.804  "transport_ack_timeout": 0,
00:23:59.804  "ctrlr_loss_timeout_sec": 0,
00:23:59.804  "reconnect_delay_sec": 0,
00:23:59.804  "fast_io_fail_timeout_sec": 0,
00:23:59.804  "disable_auto_failback": false,
00:23:59.804  "generate_uuids": false,
00:23:59.804  "transport_tos": 0,
00:23:59.804  "nvme_error_stat": false,
00:23:59.804  "rdma_srq_size": 0,
00:23:59.804  "io_path_stat": false,
00:23:59.804  "allow_accel_sequence": false,
00:23:59.804  "rdma_max_cq_size": 0,
00:23:59.804  "rdma_cm_event_timeout_ms": 0,
00:23:59.804  "dhchap_digests": [
00:23:59.804  "sha256",
00:23:59.804  "sha384",
00:23:59.804  "sha512"
00:23:59.804  ],
00:23:59.804  "dhchap_dhgroups": [
00:23:59.804  "null",
00:23:59.804  "ffdhe2048",
00:23:59.804  "ffdhe3072",
00:23:59.804  "ffdhe4096",
00:23:59.804  "ffdhe6144",
00:23:59.804  "ffdhe8192"
00:23:59.804  ]
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_nvme_set_hotplug",
00:23:59.804  "params": {
00:23:59.804  "period_us": 100000,
00:23:59.804  "enable": false
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_malloc_create",
00:23:59.804  "params": {
00:23:59.804  "name": "malloc0",
00:23:59.804  "num_blocks": 8192,
00:23:59.804  "block_size": 4096,
00:23:59.804  "physical_block_size": 4096,
00:23:59.804  "uuid": "9c3d70b6-3a8a-4df5-b4e6-2b4bccd06333",
00:23:59.804  "optimal_io_boundary": 0,
00:23:59.804  "md_size": 0,
00:23:59.804  "dif_type": 0,
00:23:59.804  "dif_is_head_of_md": false,
00:23:59.804  "dif_pi_format": 0
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "bdev_wait_for_examine"
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "nbd",
00:23:59.804  "config": []
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "scheduler",
00:23:59.804  "config": [
00:23:59.804  {
00:23:59.804  "method": "framework_set_scheduler",
00:23:59.804  "params": {
00:23:59.804  "name": "static"
00:23:59.804  }
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "subsystem": "nvmf",
00:23:59.804  "config": [
00:23:59.804  {
00:23:59.804  "method": "nvmf_set_config",
00:23:59.804  "params": {
00:23:59.804  "discovery_filter": "match_any",
00:23:59.804  "admin_cmd_passthru": {
00:23:59.804  "identify_ctrlr": false
00:23:59.804  },
00:23:59.804  "dhchap_digests": [
00:23:59.804  "sha256",
00:23:59.804  "sha384",
00:23:59.804  "sha512"
00:23:59.804  ],
00:23:59.804  "dhchap_dhgroups": [
00:23:59.804  "null",
00:23:59.804  "ffdhe2048",
00:23:59.804  "ffdhe3072",
00:23:59.804  "ffdhe4096",
00:23:59.804  "ffdhe6144",
00:23:59.804  "ffdhe8192"
00:23:59.804  ]
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_set_max_subsystems",
00:23:59.804  "params": {
00:23:59.804  "max_subsystems": 1024
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_set_crdt",
00:23:59.804  "params": {
00:23:59.804  "crdt1": 0,
00:23:59.804  "crdt2": 0,
00:23:59.804  "crdt3": 0
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_create_transport",
00:23:59.804  "params": {
00:23:59.804  "trtype": "TCP",
00:23:59.804  "max_queue_depth": 128,
00:23:59.804  "max_io_qpairs_per_ctrlr": 127,
00:23:59.804  "in_capsule_data_size": 4096,
00:23:59.804  "max_io_size": 131072,
00:23:59.804  "io_unit_size": 131072,
00:23:59.804  "max_aq_depth": 128,
00:23:59.804  "num_shared_buffers": 511,
00:23:59.804  "buf_cache_size": 4294967295,
00:23:59.804  "dif_insert_or_strip": false,
00:23:59.804  "zcopy": false,
00:23:59.804  "c2h_success": false,
00:23:59.804  "sock_priority": 0,
00:23:59.804  "abort_timeout_sec": 1,
00:23:59.804  "ack_timeout": 0,
00:23:59.804  "data_wr_pool_size": 0
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_create_subsystem",
00:23:59.804  "params": {
00:23:59.804  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.804  "allow_any_host": false,
00:23:59.804  "serial_number": "00000000000000000000",
00:23:59.804  "model_number": "SPDK bdev Controller",
00:23:59.804  "max_namespaces": 32,
00:23:59.804  "min_cntlid": 1,
00:23:59.804  "max_cntlid": 65519,
00:23:59.804  "ana_reporting": false
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_subsystem_add_host",
00:23:59.804  "params": {
00:23:59.804  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.804  "host": "nqn.2016-06.io.spdk:host1",
00:23:59.804  "psk": "key0"
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_subsystem_add_ns",
00:23:59.804  "params": {
00:23:59.804  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.804  "namespace": {
00:23:59.804  "nsid": 1,
00:23:59.804  "bdev_name": "malloc0",
00:23:59.804  "nguid": "9C3D70B63A8A4DF5B4E62B4BCCD06333",
00:23:59.804  "uuid": "9c3d70b6-3a8a-4df5-b4e6-2b4bccd06333",
00:23:59.804  "no_auto_visible": false
00:23:59.804  }
00:23:59.804  }
00:23:59.804  },
00:23:59.804  {
00:23:59.804  "method": "nvmf_subsystem_add_listener",
00:23:59.804  "params": {
00:23:59.804  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:23:59.804  "listen_address": {
00:23:59.804  "trtype": "TCP",
00:23:59.804  "adrfam": "IPv4",
00:23:59.804  "traddr": "10.0.0.2",
00:23:59.804  "trsvcid": "4420"
00:23:59.804  },
00:23:59.804  "secure_channel": false,
00:23:59.804  "sock_impl": "ssl"
00:23:59.804  }
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  }
00:23:59.804  ]
00:23:59.804  }'
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3115171
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3115171
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3115171 ']'
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:59.804   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:59.805  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:59.805   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:59.805   00:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:23:59.805  [2024-12-10 00:05:15.468669] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:23:59.805  [2024-12-10 00:05:15.468716] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:59.805  [2024-12-10 00:05:15.546466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:59.805  [2024-12-10 00:05:15.580890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:59.805  [2024-12-10 00:05:15.580926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:59.805  [2024-12-10 00:05:15.580933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:59.805  [2024-12-10 00:05:15.580939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:59.805  [2024-12-10 00:05:15.580943] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:59.805  [2024-12-10 00:05:15.581493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:00.064  [2024-12-10 00:05:15.794863] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:00.064  [2024-12-10 00:05:15.826900] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:24:00.064  [2024-12-10 00:05:15.827091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3115208
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3115208 /var/tmp/bdevperf.sock
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3115208 ']'
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:00.632   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:00.632    00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{
00:24:00.632    "subsystems": [
00:24:00.632      {
00:24:00.632        "subsystem": "keyring",
00:24:00.632        "config": [
00:24:00.632          {
00:24:00.632            "method": "keyring_file_add_key",
00:24:00.632            "params": {
00:24:00.632              "name": "key0",
00:24:00.632              "path": "/tmp/tmp.wYWtJyZDQX"
00:24:00.632            }
00:24:00.632          }
00:24:00.632        ]
00:24:00.632      },
00:24:00.632      {
00:24:00.632        "subsystem": "iobuf",
00:24:00.632        "config": [
00:24:00.632          {
00:24:00.632            "method": "iobuf_set_options",
00:24:00.632            "params": {
00:24:00.632              "small_pool_count": 8192,
00:24:00.632              "large_pool_count": 1024,
00:24:00.632              "small_bufsize": 8192,
00:24:00.632              "large_bufsize": 135168,
00:24:00.632              "enable_numa": false
00:24:00.632            }
00:24:00.632          }
00:24:00.632        ]
00:24:00.632      },
00:24:00.632      {
00:24:00.632        "subsystem": "sock",
00:24:00.632        "config": [
00:24:00.632          {
00:24:00.632            "method": "sock_set_default_impl",
00:24:00.632            "params": {
00:24:00.632              "impl_name": "posix"
00:24:00.632            }
00:24:00.632          },
00:24:00.632          {
00:24:00.632            "method": "sock_impl_set_options",
00:24:00.632            "params": {
00:24:00.632              "impl_name": "ssl",
00:24:00.632              "recv_buf_size": 4096,
00:24:00.632              "send_buf_size": 4096,
00:24:00.632              "enable_recv_pipe": true,
00:24:00.632              "enable_quickack": false,
00:24:00.632              "enable_placement_id": 0,
00:24:00.632              "enable_zerocopy_send_server": true,
00:24:00.632              "enable_zerocopy_send_client": false,
00:24:00.632              "zerocopy_threshold": 0,
00:24:00.632              "tls_version": 0,
00:24:00.632              "enable_ktls": false
00:24:00.632            }
00:24:00.632          },
00:24:00.632          {
00:24:00.632            "method": "sock_impl_set_options",
00:24:00.632            "params": {
00:24:00.632              "impl_name": "posix",
00:24:00.632              "recv_buf_size": 2097152,
00:24:00.632              "send_buf_size": 2097152,
00:24:00.632              "enable_recv_pipe": true,
00:24:00.632              "enable_quickack": false,
00:24:00.632              "enable_placement_id": 0,
00:24:00.632              "enable_zerocopy_send_server": true,
00:24:00.632              "enable_zerocopy_send_client": false,
00:24:00.632              "zerocopy_threshold": 0,
00:24:00.632              "tls_version": 0,
00:24:00.632              "enable_ktls": false
00:24:00.632            }
00:24:00.632          }
00:24:00.632        ]
00:24:00.632      },
00:24:00.632      {
00:24:00.632        "subsystem": "vmd",
00:24:00.632        "config": []
00:24:00.632      },
00:24:00.632      {
00:24:00.632        "subsystem": "accel",
00:24:00.632        "config": [
00:24:00.632          {
00:24:00.632            "method": "accel_set_options",
00:24:00.632            "params": {
00:24:00.632              "small_cache_size": 128,
00:24:00.632              "large_cache_size": 16,
00:24:00.632              "task_count": 2048,
00:24:00.632              "sequence_count": 2048,
00:24:00.632              "buf_count": 2048
00:24:00.632            }
00:24:00.632          }
00:24:00.632        ]
00:24:00.632      },
00:24:00.632      {
00:24:00.632        "subsystem": "bdev",
00:24:00.632        "config": [
00:24:00.632          {
00:24:00.632            "method": "bdev_set_options",
00:24:00.632            "params": {
00:24:00.632              "bdev_io_pool_size": 65535,
00:24:00.632              "bdev_io_cache_size": 256,
00:24:00.632              "bdev_auto_examine": true,
00:24:00.632              "iobuf_small_cache_size": 128,
00:24:00.632              "iobuf_large_cache_size": 16
00:24:00.632            }
00:24:00.632          },
00:24:00.632          {
00:24:00.632            "method": "bdev_raid_set_options",
00:24:00.632            "params": {
00:24:00.632              "process_window_size_kb": 1024,
00:24:00.632              "process_max_bandwidth_mb_sec": 0
00:24:00.632            }
00:24:00.632          },
00:24:00.632          {
00:24:00.632            "method": "bdev_iscsi_set_options",
00:24:00.632            "params": {
00:24:00.632              "timeout_sec": 30
00:24:00.632            }
00:24:00.632          },
00:24:00.632          {
00:24:00.632            "method": "bdev_nvme_set_options",
00:24:00.632            "params": {
00:24:00.632              "action_on_timeout": "none",
00:24:00.632              "timeout_us": 0,
00:24:00.632              "timeout_admin_us": 0,
00:24:00.632              "keep_alive_timeout_ms": 10000,
00:24:00.632              "arbitration_burst": 0,
00:24:00.632              "low_priority_weight": 0,
00:24:00.632              "medium_priority_weight": 0,
00:24:00.632              "high_priority_weight": 0,
00:24:00.632              "nvme_adminq_poll_period_us": 10000,
00:24:00.632              "nvme_ioq_poll_period_us": 0,
00:24:00.632              "io_queue_requests": 512,
00:24:00.632              "delay_cmd_submit": true,
00:24:00.632              "transport_retry_count": 4,
00:24:00.632              "bdev_retry_count": 3,
00:24:00.632              "transport_ack_timeout": 0,
00:24:00.632              "ctrlr_loss_timeout_sec": 0,
00:24:00.632              "reconnect_delay_sec": 0,
00:24:00.632              "fast_io_fail_timeout_sec": 0,
00:24:00.632              "disable_auto_failback": false,
00:24:00.632              "generate_uuids": false,
00:24:00.632              "transport_tos": 0,
00:24:00.632              "nvme_error_stat": false,
00:24:00.632              "rdma_srq_size": 0,
00:24:00.632              "io_path_stat": false,
00:24:00.632              "allow_accel_sequence": false,
00:24:00.632              "rdma_max_cq_size": 0,
00:24:00.632              "rdma_cm_event_timeout_ms": 0,
00:24:00.632              "dhchap_digests": [
00:24:00.632                "sha256",
00:24:00.632                "sha384",
00:24:00.632                "sha512"
00:24:00.632              ],
00:24:00.632              "dhchap_dhgroups": [
00:24:00.632                "null",
00:24:00.632                "ffdhe2048",
00:24:00.632                "ffdhe3072",
00:24:00.632                "ffdhe4096",
00:24:00.632                "ffdhe6144",
00:24:00.633                "ffdhe8192"
00:24:00.633              ]
00:24:00.633            }
00:24:00.633          },
00:24:00.633          {
00:24:00.633            "method": "bdev_nvme_attach_controller",
00:24:00.633            "params": {
00:24:00.633              "name": "nvme0",
00:24:00.633              "trtype": "TCP",
00:24:00.633              "adrfam": "IPv4",
00:24:00.633              "traddr": "10.0.0.2",
00:24:00.633              "trsvcid": "4420",
00:24:00.633              "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:00.633              "prchk_reftag": false,
00:24:00.633              "prchk_guard": false,
00:24:00.633              "ctrlr_loss_timeout_sec": 0,
00:24:00.633              "reconnect_delay_sec": 0,
00:24:00.633              "fast_io_fail_timeout_sec": 0,
00:24:00.633              "psk": "key0",
00:24:00.633              "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:00.633              "hdgst": false,
00:24:00.633              "ddgst": false,
00:24:00.633              "multipath": "multipath"
00:24:00.633            }
00:24:00.633          },
00:24:00.633          {
00:24:00.633            "method": "bdev_nvme_set_hotplug",
00:24:00.633            "params": {
00:24:00.633              "period_us": 100000,
00:24:00.633              "enable": false
00:24:00.633            }
00:24:00.633          },
00:24:00.633          {
00:24:00.633            "method": "bdev_enable_histogram",
00:24:00.633            "params": {
00:24:00.633              "name": "nvme0n1",
00:24:00.633              "enable": true
00:24:00.633            }
00:24:00.633          },
00:24:00.633          {
00:24:00.633            "method": "bdev_wait_for_examine"
00:24:00.633          }
00:24:00.633        ]
00:24:00.633      },
00:24:00.633      {
00:24:00.633        "subsystem": "nbd",
00:24:00.633        "config": []
00:24:00.633      }
00:24:00.633    ]
00:24:00.633  }'
00:24:00.633   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:00.633  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:00.633   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:00.633   00:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:00.633  [2024-12-10 00:05:16.380454] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:24:00.633  [2024-12-10 00:05:16.380501] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3115208 ]
00:24:00.633  [2024-12-10 00:05:16.453080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:00.892  [2024-12-10 00:05:16.494778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:00.892  [2024-12-10 00:05:16.648317] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:01.460   00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:01.460   00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0
00:24:01.460    00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:01.460    00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name'
00:24:01.719   00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:01.719   00:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:01.719  Running I/O for 1 seconds...
00:24:02.911       5328.00 IOPS,    20.81 MiB/s
00:24:02.911                                                                                                  Latency(us)
00:24:02.911  
00:24:02.911  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:02.911  Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:24:02.911  	 Verification LBA range: start 0x0 length 0x2000
00:24:02.911  	 nvme0n1             :       1.01    5388.78      21.05       0.00     0.00   23596.35    5336.50   22469.49
00:24:02.911  
00:24:02.911  ===================================================================================================================
00:24:02.911  
00:24:02.911  Total                       :               5388.78      21.05       0.00     0.00   23596.35    5336.50   22469.49
00:24:02.911  {
00:24:02.911    "results": [
00:24:02.911      {
00:24:02.911        "job": "nvme0n1",
00:24:02.911        "core_mask": "0x2",
00:24:02.911        "workload": "verify",
00:24:02.911        "status": "finished",
00:24:02.911        "verify_range": {
00:24:02.911          "start": 0,
00:24:02.911          "length": 8192
00:24:02.911        },
00:24:02.911        "queue_depth": 128,
00:24:02.911        "io_size": 4096,
00:24:02.911        "runtime": 1.012474,
00:24:02.911        "iops": 5388.780353865877,
00:24:02.911        "mibps": 21.049923257288583,
00:24:02.911        "io_failed": 0,
00:24:02.911        "io_timeout": 0,
00:24:02.911        "avg_latency_us": 23596.34902387935,
00:24:02.911        "min_latency_us": 5336.5028571428575,
00:24:02.911        "max_latency_us": 22469.485714285714
00:24:02.911      }
00:24:02.911    ],
00:24:02.911    "core_count": 1
00:24:02.911  }
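The JSON block above is the machine-readable summary that bdevperf's `perform_tests` RPC prints alongside the human-readable latency table. As a minimal sketch (not part of the test script itself; the field names are taken directly from the JSON above), results in this shape can be post-processed with a few lines of Python, and the reported MiB/s can be cross-checked from `iops` and `io_size`:

```python
import json

# bdevperf "perform_tests" results in the shape shown above (abridged to
# the fields used below)
results_json = """
{
  "results": [
    {
      "job": "nvme0n1",
      "workload": "verify",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 1.012474,
      "iops": 5388.780353865877,
      "mibps": 21.049923257288583,
      "io_failed": 0,
      "io_timeout": 0
    }
  ],
  "core_count": 1
}
"""

data = json.loads(results_json)
for job in data["results"]:
    # MiB/s follows from IOPS and the per-I/O size: iops * io_size / 2**20
    derived_mibps = job["iops"] * job["io_size"] / 2**20
    print(f'{job["job"]}: {job["iops"]:.0f} IOPS, '
          f'{derived_mibps:.2f} MiB/s, {job["io_failed"]} failed')
```

The derived figure agrees with the reported `mibps` field, since MiB/s is simply IOPS times the 4096-byte I/O size divided by 2^20.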
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:02.912    00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:02.912  nvmf_trace.0
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3115208
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3115208 ']'
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3115208
00:24:02.912    00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:02.912    00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115208
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115208'
00:24:02.912  killing process with pid 3115208
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3115208
00:24:02.912  Received shutdown signal, test time was about 1.000000 seconds
00:24:02.912  
00:24:02.912                                                                                                  Latency(us)
00:24:02.912  
00:24:02.912  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:02.912  
00:24:02.912  ===================================================================================================================
00:24:02.912  
00:24:02.912  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:24:02.912   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3115208
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:03.172  rmmod nvme_tcp
00:24:03.172  rmmod nvme_fabrics
00:24:03.172  rmmod nvme_keyring
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3115171 ']'
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3115171
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3115171 ']'
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3115171
00:24:03.172    00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:03.172    00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3115171
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3115171'
00:24:03.172  killing process with pid 3115171
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3115171
00:24:03.172   00:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3115171
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:03.432   00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:03.432    00:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:05.335   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:05.335   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.C6e8ioS86X /tmp/tmp.t8hUYPGy9g /tmp/tmp.wYWtJyZDQX
00:24:05.335  
00:24:05.335  real	1m19.254s
00:24:05.335  user	2m1.710s
00:24:05.335  sys	0m30.147s
00:24:05.335   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:05.335   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:24:05.335  ************************************
00:24:05.335  END TEST nvmf_tls
00:24:05.335  ************************************
00:24:05.594   00:05:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:24:05.594   00:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:05.594   00:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:05.594   00:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:05.594  ************************************
00:24:05.594  START TEST nvmf_fips
00:24:05.594  ************************************
00:24:05.594   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:24:05.594  * Looking for test storage...
00:24:05.595  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:05.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.595  		--rc genhtml_branch_coverage=1
00:24:05.595  		--rc genhtml_function_coverage=1
00:24:05.595  		--rc genhtml_legend=1
00:24:05.595  		--rc geninfo_all_blocks=1
00:24:05.595  		--rc geninfo_unexecuted_blocks=1
00:24:05.595  		
00:24:05.595  		'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:05.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.595  		--rc genhtml_branch_coverage=1
00:24:05.595  		--rc genhtml_function_coverage=1
00:24:05.595  		--rc genhtml_legend=1
00:24:05.595  		--rc geninfo_all_blocks=1
00:24:05.595  		--rc geninfo_unexecuted_blocks=1
00:24:05.595  		
00:24:05.595  		'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:05.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.595  		--rc genhtml_branch_coverage=1
00:24:05.595  		--rc genhtml_function_coverage=1
00:24:05.595  		--rc genhtml_legend=1
00:24:05.595  		--rc geninfo_all_blocks=1
00:24:05.595  		--rc geninfo_unexecuted_blocks=1
00:24:05.595  		
00:24:05.595  		'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:05.595  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:05.595  		--rc genhtml_branch_coverage=1
00:24:05.595  		--rc genhtml_function_coverage=1
00:24:05.595  		--rc genhtml_legend=1
00:24:05.595  		--rc geninfo_all_blocks=1
00:24:05.595  		--rc geninfo_unexecuted_blocks=1
00:24:05.595  		
00:24:05.595  		'
00:24:05.595   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:05.595     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:05.595    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:05.854    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:05.854     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob
00:24:05.854     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:05.854     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:05.854     00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:05.854      00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.855      00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.855      00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.855      00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH
00:24:05.855      00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:05.855  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}'
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-:
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-:
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>='
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]]
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]]
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! -t 0 ]]
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat -
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 ))
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: openssl base provider != *base* ]]
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[     name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0
00:24:05.855    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # :
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl
00:24:05.855   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:05.856    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:05.856    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:24:05.856  Error setting digest
00:24:05.856  405270603E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:24:05.856  405270603E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:05.856    00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable
00:24:05.856   00:05:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=()
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=()
00:24:12.443   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=()
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=()
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=()
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:24:12.444  Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:12.444  Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:12.444  Found net devices under 0000:af:00.0: cvl_0_0
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:12.444  Found net devices under 0000:af:00.1: cvl_0_1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:12.444   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:12.444  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:12.444  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms
00:24:12.444  
00:24:12.444  --- 10.0.0.2 ping statistics ---
00:24:12.445  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.445  rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:12.445  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:12.445  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:24:12.445  
00:24:12.445  --- 10.0.0.1 ping statistics ---
00:24:12.445  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.445  rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3119153
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3119153
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3119153 ']'
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:12.445  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:12.445   00:05:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:12.445  [2024-12-10 00:05:27.578881] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:24:12.445  [2024-12-10 00:05:27.578931] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:12.445  [2024-12-10 00:05:27.658204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:12.445  [2024-12-10 00:05:27.699773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:12.445  [2024-12-10 00:05:27.699804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:12.445  [2024-12-10 00:05:27.699812] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:12.445  [2024-12-10 00:05:27.699819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:12.445  [2024-12-10 00:05:27.699827] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:12.445  [2024-12-10 00:05:27.700322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:24:12.704    00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.zQ4
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.zQ4
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.zQ4
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.zQ4
00:24:12.704   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:24:12.963  [2024-12-10 00:05:28.613851] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:12.963  [2024-12-10 00:05:28.629850] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:24:12.963  [2024-12-10 00:05:28.630028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:12.963  malloc0
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3119398
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3119398 /var/tmp/bdevperf.sock
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3119398 ']'
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:12.963  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:12.963   00:05:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:12.963  [2024-12-10 00:05:28.756939] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:24:12.963  [2024-12-10 00:05:28.756985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3119398 ]
00:24:13.222  [2024-12-10 00:05:28.831955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:13.222  [2024-12-10 00:05:28.872500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:13.789   00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:13.789   00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:24:13.789   00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.zQ4
00:24:14.048   00:05:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:24:14.307  [2024-12-10 00:05:29.945860] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:24:14.307  TLSTESTn1
00:24:14.307   00:05:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:14.307  Running I/O for 10 seconds...
00:24:16.618       5579.00 IOPS,    21.79 MiB/s
[2024-12-09T23:05:33.413Z]      5609.50 IOPS,    21.91 MiB/s
[2024-12-09T23:05:34.348Z]      5672.00 IOPS,    22.16 MiB/s
[2024-12-09T23:05:35.284Z]      5620.50 IOPS,    21.96 MiB/s
[2024-12-09T23:05:36.218Z]      5664.20 IOPS,    22.13 MiB/s
[2024-12-09T23:05:37.155Z]      5602.33 IOPS,    21.88 MiB/s
[2024-12-09T23:05:38.533Z]      5585.86 IOPS,    21.82 MiB/s
[2024-12-09T23:05:39.469Z]      5456.88 IOPS,    21.32 MiB/s
[2024-12-09T23:05:40.406Z]      5445.11 IOPS,    21.27 MiB/s
[2024-12-09T23:05:40.406Z]      5391.90 IOPS,    21.06 MiB/s
00:24:24.549                                                                                                  Latency(us)
00:24:24.549  
[2024-12-09T23:05:40.406Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:24.549  Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:24.549  	 Verification LBA range: start 0x0 length 0x2000
00:24:24.549  	 TLSTESTn1           :      10.02    5393.81      21.07       0.00     0.00   23694.61    6210.32   35701.52
00:24:24.549  
[2024-12-09T23:05:40.406Z]  ===================================================================================================================
00:24:24.549  
[2024-12-09T23:05:40.406Z]  Total                       :               5393.81      21.07       0.00     0.00   23694.61    6210.32   35701.52
00:24:24.549  {
00:24:24.549    "results": [
00:24:24.549      {
00:24:24.549        "job": "TLSTESTn1",
00:24:24.549        "core_mask": "0x4",
00:24:24.549        "workload": "verify",
00:24:24.549        "status": "finished",
00:24:24.549        "verify_range": {
00:24:24.549          "start": 0,
00:24:24.549          "length": 8192
00:24:24.549        },
00:24:24.549        "queue_depth": 128,
00:24:24.549        "io_size": 4096,
00:24:24.549        "runtime": 10.020187,
00:24:24.549        "iops": 5393.811512699314,
00:24:24.549        "mibps": 21.069576221481697,
00:24:24.549        "io_failed": 0,
00:24:24.549        "io_timeout": 0,
00:24:24.549        "avg_latency_us": 23694.61096474233,
00:24:24.549        "min_latency_us": 6210.31619047619,
00:24:24.549        "max_latency_us": 35701.51619047619
00:24:24.549      }
00:24:24.549    ],
00:24:24.549    "core_count": 1
00:24:24.549  }
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:24.549    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:24.549  nvmf_trace.0
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3119398
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3119398 ']'
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3119398
00:24:24.549    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:24.549    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119398
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119398'
00:24:24.549  killing process with pid 3119398
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3119398
00:24:24.549  Received shutdown signal, test time was about 10.000000 seconds
00:24:24.549  
00:24:24.549                                                                                                  Latency(us)
00:24:24.549  
[2024-12-09T23:05:40.406Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:24.549  
[2024-12-09T23:05:40.406Z]  ===================================================================================================================
00:24:24.549  
[2024-12-09T23:05:40.406Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:24:24.549   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3119398
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:24.810  rmmod nvme_tcp
00:24:24.810  rmmod nvme_fabrics
00:24:24.810  rmmod nvme_keyring
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3119153 ']'
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3119153
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3119153 ']'
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3119153
00:24:24.810    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:24.810    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3119153
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3119153'
00:24:24.810  killing process with pid 3119153
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3119153
00:24:24.810   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3119153
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:25.071   00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:25.071    00:05:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
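`xtrace_disable_per_cmd` above silences tracing for one command by eval'ing it with `15> /dev/null`, i.e. by redirecting the trace file descriptor. A simpler stand-in for the same idea, toggling `set +x` around the command instead of redirecting fd 15 (this is a substitute technique, not SPDK's implementation):

```shell
# Run one command with xtrace suppressed, restoring the prior trace
# state and preserving the command's exit code.
xtrace_off_per_cmd() {
    local was_x=0 rc
    [[ $- == *x* ]] && was_x=1   # remember whether -x was on
    set +x
    "$@"
    rc=$?
    (( was_x )) && set -x        # restore tracing only if it was on
    return $rc
}
```

The fd-redirection form in the log has the advantage of also hiding the trace lines produced while disabling the trace itself, which a plain `set +x` cannot do.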
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.zQ4
00:24:27.608  
00:24:27.608  real	0m21.596s
00:24:27.608  user	0m23.215s
00:24:27.608  sys	0m9.797s
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:27.608  ************************************
00:24:27.608  END TEST nvmf_fips
00:24:27.608  ************************************
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:27.608  ************************************
00:24:27.608  START TEST nvmf_control_msg_list
00:24:27.608  ************************************
00:24:27.608   00:05:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:27.608  * Looking for test storage...
00:24:27.608  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:27.608     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0
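The `cmp_versions 1.15 '<' 2` walk traced above splits both versions on `.-:`, pads the shorter array with zeros, and compares component-wise. A condensed sketch of the same comparison as one function (names are illustrative; the real helper lives in `scripts/common.sh`):

```shell
# True (exit 0) iff $1 < $2, comparing dot/dash/colon-separated numeric
# components; missing components count as 0, as in the traced walk.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local v len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
        (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
    done
    return 1  # versions are equal, so not strictly less-than
}
```

Numeric comparison is the point: a plain string sort would rank `1.15` above `1.2`, whereas here `15 > 2` is decided arithmetically per component.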
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:27.608    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:27.608  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:27.608  		--rc genhtml_branch_coverage=1
00:24:27.608  		--rc genhtml_function_coverage=1
00:24:27.608  		--rc genhtml_legend=1
00:24:27.609  		--rc geninfo_all_blocks=1
00:24:27.609  		--rc geninfo_unexecuted_blocks=1
00:24:27.609  		
00:24:27.609  		'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:27.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:27.609  		--rc genhtml_branch_coverage=1
00:24:27.609  		--rc genhtml_function_coverage=1
00:24:27.609  		--rc genhtml_legend=1
00:24:27.609  		--rc geninfo_all_blocks=1
00:24:27.609  		--rc geninfo_unexecuted_blocks=1
00:24:27.609  		
00:24:27.609  		'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:27.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:27.609  		--rc genhtml_branch_coverage=1
00:24:27.609  		--rc genhtml_function_coverage=1
00:24:27.609  		--rc genhtml_legend=1
00:24:27.609  		--rc geninfo_all_blocks=1
00:24:27.609  		--rc geninfo_unexecuted_blocks=1
00:24:27.609  		
00:24:27.609  		'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:27.609  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:27.609  		--rc genhtml_branch_coverage=1
00:24:27.609  		--rc genhtml_function_coverage=1
00:24:27.609  		--rc genhtml_legend=1
00:24:27.609  		--rc geninfo_all_blocks=1
00:24:27.609  		--rc geninfo_unexecuted_blocks=1
00:24:27.609  		
00:24:27.609  		'
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:27.609     00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:27.609      00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:27.609      00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:27.609      00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:27.609      00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH
00:24:27.609      00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:27.609  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
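The line above records a real (non-fatal) shell error: `'[' '' -eq 1 ']'` fails because `test`'s `-eq` requires integer operands and the variable being tested expands to the empty string. The usual guard is to default the expansion, sketched here with a hypothetical flag name standing in for whichever variable was empty at `common.sh` line 33:

```shell
# ${VAR:-0} substitutes 0 when VAR is unset OR empty, so -eq always
# sees an integer. SPDK_TEST_FOO is a placeholder, not a real flag.
SPDK_TEST_FOO=""
if [ "${SPDK_TEST_FOO:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled or unset"
fi
```

Note `:-` rather than `-`: `${VAR-0}` would still pass the empty string through when the variable is set-but-empty, which is exactly the case the log shows.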
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:27.609    00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable
00:24:27.609   00:05:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:32.883   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:32.883   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=()
00:24:32.883   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=()
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:24:32.884  Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:32.884  Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:32.884  Found net devices under 0000:af:00.0: cvl_0_0
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:32.884   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:33.156  Found net devices under 0000:af:00.1: cvl_0_1
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
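The discovery loop above maps each PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path (`${pci_net_devs[@]##*/}`), which is how `0000:af:00.0` resolves to `cvl_0_0`. A self-contained sketch of that lookup, with the sysfs root parameterized so it can run against a mock tree (the `SYSFS_ROOT` knob is an addition for testability, not part of SPDK):

```shell
# Print the network interface name(s) backing a PCI device, using the
# sysfs layout /sys/bus/pci/devices/<pci>/net/<iface>.
pci_net_ifaces() {
    local root=${SYSFS_ROOT:-/sys/bus/pci/devices} pci=$1 d
    for d in "$root/$pci/net/"*; do
        [ -e "$d" ] || continue   # unmatched glob stays literal; skip it
        echo "${d##*/}"           # basename only, e.g. cvl_0_0
    done
}
```

A device bound to no driver (or to a userspace driver like vfio-pci) has no `net/` entry, which is why the surrounding script also checks for `unknown`/`unbound` drivers before trusting the glob.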
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:33.156   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
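Steps 267 through 287 above build the test topology: one physical port is moved into a network namespace to act as the NVMe-oF target while its sibling port stays in the root namespace as the initiator, and the listener port is opened with a comment-tagged iptables rule so teardown can later filter it out with `iptables-save | grep -v SPDK_NVMF`. A condensed, root-only sketch of the same sequence (interface names, addresses, and the namespace name follow this log; this is an ops fragment, not the full `nvmf_tcp_init` helper):

```shell
#!/usr/bin/env bash
set -euo pipefail
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"         # target-side NIC lives in the ns
ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Tagged rule: cleanup restores iptables minus anything marked SPDK_NVMF.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow NVMe/TCP on 4420'

ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator
```

The two pings at the end correspond to the `ping -c 1` checks in the trace; both must succeed before `nvmftestinit` declares the fabric usable and prepends `ip netns exec $NVMF_TARGET_NAMESPACE` to the target application command line.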
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:33.157  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:33.157  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms
00:24:33.157  
00:24:33.157  --- 10.0.0.2 ping statistics ---
00:24:33.157  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:33.157  rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:24:33.157   00:05:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:33.157  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:33.157  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms
00:24:33.157  
00:24:33.157  --- 10.0.0.1 ping statistics ---
00:24:33.157  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:33.157  rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms
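The commands above (nvmf/common.sh @267–@291) move one port of the NIC pair into a private network namespace, address both sides, and verify connectivity in each direction before the target starts. A dry-run sketch of that sequence — interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this log, and `run` only prints each command, since the real ones need root and physical interfaces:

```shell
# Dry-run sketch of the namespace setup traced above. `run` echoes instead
# of executing, so this is runnable anywhere without root.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                          # target-side namespace
run ip link set cvl_0_0 netns "$NS"             # move one NIC port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                          # root ns -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1      # namespace -> root ns
```

Pinging in both directions before launching `nvmf_tgt` inside the namespace is what lets the later connection failures (if any) be blamed on the target rather than the plumbing.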
00:24:33.157   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:33.157   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:33.158   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3124864
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3124864
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3124864 ']'
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:33.418  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:33.418   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.418  [2024-12-10 00:05:49.100431] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:24:33.418  [2024-12-10 00:05:49.100484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:33.418  [2024-12-10 00:05:49.179731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:33.418  [2024-12-10 00:05:49.219291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:33.418  [2024-12-10 00:05:49.219328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:33.418  [2024-12-10 00:05:49.219334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:33.418  [2024-12-10 00:05:49.219340] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:33.418  [2024-12-10 00:05:49.219345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:33.418  [2024-12-10 00:05:49.219824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678  [2024-12-10 00:05:49.355744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678  Malloc0
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:33.678  [2024-12-10 00:05:49.395782] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3124890
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3124891
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3124892
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3124890
00:24:33.678   00:05:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:33.678  [2024-12-10 00:05:49.490476] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:24:33.678  [2024-12-10 00:05:49.490648] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:24:33.678  [2024-12-10 00:05:49.490803] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:24:35.053  Initializing NVMe Controllers
00:24:35.053  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:35.053  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:35.053  Initialization complete. Launching workers.
00:24:35.053  ========================================================
00:24:35.053                                                                                                               Latency(us)
00:24:35.053  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:35.053  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  2:      25.00       0.10   40952.24   40786.34   41920.22
00:24:35.053  ========================================================
00:24:35.053  Total                                                                    :      25.00       0.10   40952.24   40786.34   41920.22
00:24:35.053  
00:24:35.053  Initializing NVMe Controllers
00:24:35.053  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:35.053  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:35.053  Initialization complete. Launching workers.
00:24:35.053  ========================================================
00:24:35.053                                                                                                               Latency(us)
00:24:35.053  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:35.053  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  1:    6845.99      26.74     145.74     131.14     431.66
00:24:35.053  ========================================================
00:24:35.053  Total                                                                    :    6845.99      26.74     145.74     131.14     431.66
00:24:35.053  
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3124891
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3124892
00:24:35.053  Initializing NVMe Controllers
00:24:35.053  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:35.053  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:35.053  Initialization complete. Launching workers.
00:24:35.053  ========================================================
00:24:35.053                                                                                                               Latency(us)
00:24:35.053  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:35.053  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  3:      25.00       0.10   40966.82   40582.46   41894.07
00:24:35.053  ========================================================
00:24:35.053  Total                                                                    :      25.00       0.10   40966.82   40582.46   41894.07
00:24:35.053  
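The three `spdk_nvme_perf` runs above are launched in the background on disjoint core masks (0x2, 0x4, 0x8), their PIDs recorded as `perf_pid1..3`, and then each is collected with `wait` (control_msg_list.sh @26–@35) so a non-zero exit is attributable to a specific worker. The same launch-and-collect pattern, with a hypothetical `worker` function standing in for the perf invocations:

```shell
# Launch several workers in the background, record each PID, then wait on
# each one individually. `worker` is a placeholder for the spdk_nvme_perf
# commands in the log; its argument stands in for the core mask.
worker() { sleep 0.1; echo "worker $1 done"; }

worker 1 & pid1=$!
worker 2 & pid2=$!
worker 3 & pid3=$!

wait "$pid1"
wait "$pid2"
wait "$pid3"
echo "all workers finished"
```

Waiting on explicit PIDs (rather than a bare `wait`) preserves each child's exit status, which is why the test script can fail fast if any single perf instance errors out.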
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:35.053  rmmod nvme_tcp
00:24:35.053  rmmod nvme_fabrics
00:24:35.053  rmmod nvme_keyring
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 3124864 ']'
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3124864
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3124864 ']'
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3124864
00:24:35.053    00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:35.053    00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3124864
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3124864'
00:24:35.053  killing process with pid 3124864
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3124864
00:24:35.053   00:05:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3124864
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore
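Cleanup removes only the firewall rules this test added: each rule was tagged at insert time with `-m comment --comment 'SPDK_NVMF:…'` (common.sh @790 earlier in this log), so teardown can pipe `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. The filtering step can be shown on sample saved-ruleset text without touching a live firewall (the rules below are illustrative, not from this run):

```shell
# Filter comment-tagged rules out of a saved ruleset, the way
# `iptables-save | grep -v SPDK_NVMF | iptables-restore` does above.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:tagged"
-A INPUT -p icmp -j ACCEPT'

kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Dropping whole lines by tag means pre-existing rules survive untouched, and the test never has to remember exactly which rules it inserted.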
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:35.313   00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:35.313    00:05:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:37.851  
00:24:37.851  real	0m10.190s
00:24:37.851  user	0m6.720s
00:24:37.851  sys	0m5.416s
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:37.851  ************************************
00:24:37.851  END TEST nvmf_control_msg_list
00:24:37.851  ************************************
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:37.851  ************************************
00:24:37.851  START TEST nvmf_wait_for_buf
00:24:37.851  ************************************
00:24:37.851   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:37.851  * Looking for test storage...
00:24:37.851  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:37.851     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version
00:24:37.851     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-:
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-:
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2
00:24:37.851    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0
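The `cmp_versions` walk traced above (scripts/common.sh @333–@368) splits both version strings on `.`, `-`, and `:`, then compares them component by component until one side wins — here establishing `lt 1.15 2`. A simplified re-implementation of the numeric part, assuming dotted numeric versions only (the real helper also handles `-`/`:` separators and other operators):

```shell
# Component-wise "less than" for dotted numeric versions, mirroring the
# cmp_versions loop above. Missing trailing fields are treated as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1   # equal versions are not less-than
}
```

Splitting on a local `IFS` keeps the field separation scoped to the function, the same trick the original uses with `IFS=.-:` and `read -ra`.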
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:37.852  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.852  		--rc genhtml_branch_coverage=1
00:24:37.852  		--rc genhtml_function_coverage=1
00:24:37.852  		--rc genhtml_legend=1
00:24:37.852  		--rc geninfo_all_blocks=1
00:24:37.852  		--rc geninfo_unexecuted_blocks=1
00:24:37.852  		
00:24:37.852  		'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:37.852  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.852  		--rc genhtml_branch_coverage=1
00:24:37.852  		--rc genhtml_function_coverage=1
00:24:37.852  		--rc genhtml_legend=1
00:24:37.852  		--rc geninfo_all_blocks=1
00:24:37.852  		--rc geninfo_unexecuted_blocks=1
00:24:37.852  		
00:24:37.852  		'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:37.852  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.852  		--rc genhtml_branch_coverage=1
00:24:37.852  		--rc genhtml_function_coverage=1
00:24:37.852  		--rc genhtml_legend=1
00:24:37.852  		--rc geninfo_all_blocks=1
00:24:37.852  		--rc geninfo_unexecuted_blocks=1
00:24:37.852  		
00:24:37.852  		'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:37.852  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:37.852  		--rc genhtml_branch_coverage=1
00:24:37.852  		--rc genhtml_function_coverage=1
00:24:37.852  		--rc genhtml_legend=1
00:24:37.852  		--rc geninfo_all_blocks=1
00:24:37.852  		--rc geninfo_unexecuted_blocks=1
00:24:37.852  		
00:24:37.852  		'
00:24:37.852   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:37.852    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob
00:24:37.852     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:37.853     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:37.853     00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:37.853      00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.853      00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.853      00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.853      00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH
00:24:37.853      00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:37.853  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:37.853    00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable
00:24:37.853   00:05:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=()
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:43.323   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:24:43.324  Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:43.324  Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:43.324   00:05:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:43.324  Found net devices under 0000:af:00.0: cvl_0_0
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:43.324  Found net devices under 0000:af:00.1: cvl_0_1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:24:43.324   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:43.583   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:43.584  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:43.584  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms
00:24:43.584  
00:24:43.584  --- 10.0.0.2 ping statistics ---
00:24:43.584  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.584  rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:43.584  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:43.584  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:24:43.584  
00:24:43.584  --- 10.0.0.1 ping statistics ---
00:24:43.584  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.584  rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3128584
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3128584
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3128584 ']'
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:43.584  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:43.584   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.584  [2024-12-10 00:05:59.337954] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:24:43.584  [2024-12-10 00:05:59.337999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:43.584  [2024-12-10 00:05:59.414102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:43.843  [2024-12-10 00:05:59.454097] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:43.843  [2024-12-10 00:05:59.454129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:43.843  [2024-12-10 00:05:59.454136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:43.843  [2024-12-10 00:05:59.454142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:43.843  [2024-12-10 00:05:59.454147] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:43.843  [2024-12-10 00:05:59.454632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843  Malloc0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843  [2024-12-10 00:05:59.624170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:43.843   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:43.844  [2024-12-10 00:05:59.652350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:43.844   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:43.844   00:05:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:44.102  [2024-12-10 00:05:59.735236] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:24:45.477  Initializing NVMe Controllers
00:24:45.477  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:45.477  Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:24:45.477  Initialization complete. Launching workers.
00:24:45.477  ========================================================
00:24:45.477                                                                                                               Latency(us)
00:24:45.477  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:24:45.477  TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core  0:      29.00       3.62  147251.64    7266.46  200512.92
00:24:45.477  ========================================================
00:24:45.477  Total                                                                    :      29.00       3.62  147251.64    7266.46  200512.92
00:24:45.477  
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=438
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 438 -eq 0 ]]
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:45.477  rmmod nvme_tcp
00:24:45.477  rmmod nvme_fabrics
00:24:45.477  rmmod nvme_keyring
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3128584 ']'
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3128584
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3128584 ']'
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3128584
00:24:45.477    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname
00:24:45.477   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:45.736    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3128584
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3128584'
00:24:45.736  killing process with pid 3128584
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3128584
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3128584
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:45.736   00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:45.736    00:06:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:48.269  
00:24:48.269  real	0m10.414s
00:24:48.269  user	0m4.021s
00:24:48.269  sys	0m4.826s
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:48.269  ************************************
00:24:48.269  END TEST nvmf_wait_for_buf
00:24:48.269  ************************************
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:24:48.269   00:06:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=()
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:24:53.543  Found 0000:af:00.0 (0x8086 - 0x159b)
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:24:53.543  Found 0000:af:00.1 (0x8086 - 0x159b)
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:24:53.543   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:24:53.544  Found net devices under 0000:af:00.0: cvl_0_0
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]]
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:24:53.544  Found net devices under 0000:af:00.1: cvl_0_1
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 ))
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:53.544  ************************************
00:24:53.544  START TEST nvmf_perf_adq
00:24:53.544  ************************************
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp
00:24:53.544  * Looking for test storage...
00:24:53.544  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-:
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-:
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:53.544     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:24:53.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:53.544  		--rc genhtml_branch_coverage=1
00:24:53.544  		--rc genhtml_function_coverage=1
00:24:53.544  		--rc genhtml_legend=1
00:24:53.544  		--rc geninfo_all_blocks=1
00:24:53.544  		--rc geninfo_unexecuted_blocks=1
00:24:53.544  		
00:24:53.544  		'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:24:53.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:53.544  		--rc genhtml_branch_coverage=1
00:24:53.544  		--rc genhtml_function_coverage=1
00:24:53.544  		--rc genhtml_legend=1
00:24:53.544  		--rc geninfo_all_blocks=1
00:24:53.544  		--rc geninfo_unexecuted_blocks=1
00:24:53.544  		
00:24:53.544  		'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:24:53.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:53.544  		--rc genhtml_branch_coverage=1
00:24:53.544  		--rc genhtml_function_coverage=1
00:24:53.544  		--rc genhtml_legend=1
00:24:53.544  		--rc geninfo_all_blocks=1
00:24:53.544  		--rc geninfo_unexecuted_blocks=1
00:24:53.544  		
00:24:53.544  		'
00:24:53.544    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:24:53.544  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:53.544  		--rc genhtml_branch_coverage=1
00:24:53.544  		--rc genhtml_function_coverage=1
00:24:53.544  		--rc genhtml_legend=1
00:24:53.544  		--rc geninfo_all_blocks=1
00:24:53.544  		--rc geninfo_unexecuted_blocks=1
00:24:53.544  		
00:24:53.544  		'
00:24:53.544   00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:53.804     00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:53.804      00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:53.804      00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:53.804      00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:53.804      00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH
00:24:53.804      00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:24:53.804  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:24:53.804    00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0
00:24:53.804   00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs
00:24:53.804   00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:24:53.804   00:06:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:00.375   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:25:00.376  Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:25:00.376  Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:00.376  Found net devices under 0000:af:00.0: cvl_0_0
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:00.376  Found net devices under 0000:af:00.1: cvl_0_1
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:00.376   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 ))
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:25:00.377   00:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:25:00.377   00:06:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:25:02.913   00:06:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
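The adq_reload_driver phase above (modprobe -a sch_mqprio, rmmod ice, modprobe ice, sleep 5) is a plain driver-reload pattern. A minimal dry-run sketch of it, assuming only the module names and the 5-second settle delay visible in the log; the `RUN=echo` indirection is an addition of this sketch (not part of the harness) so it prints the commands instead of executing them and needs no root:

```shell
#!/bin/sh
# Dry-run sketch of the adq_reload_driver sequence from the log.
# RUN=echo prints each command instead of executing it; set RUN="" on a
# real host (as root) to actually reload the driver.
RUN="${RUN:-echo}"

reload_ice() {
    $RUN modprobe -a sch_mqprio    # make sure the mqprio qdisc module is loaded
    $RUN rmmod ice                 # unload the E810 driver
    $RUN modprobe ice              # reload it
    $RUN sleep 5                   # give the NIC time to re-register its netdevs
}

reload_ice
```

The sleep matters: the subsequent nvmftestinit re-enumerates `/sys/bus/pci/devices/*/net/`, which is only repopulated once the reloaded driver has brought the ports back up.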
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:08.191    00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:25:08.191  Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:25:08.191  Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:08.191  Found net devices under 0000:af:00.0: cvl_0_0
00:25:08.191   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:08.192  Found net devices under 0000:af:00.1: cvl_0_1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:08.192  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:08.192  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms
00:25:08.192  
00:25:08.192  --- 10.0.0.2 ping statistics ---
00:25:08.192  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:08.192  rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:08.192  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:08.192  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:25:08.192  
00:25:08.192  --- 10.0.0.1 ping statistics ---
00:25:08.192  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:08.192  rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
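The nvmf_tcp_init steps above build a two-port loopback topology: one physical port (cvl_0_0) is moved into a network namespace to act as the target side, while the other (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction verifies the link. A dry-run sketch of that topology, using the interface names, namespace name, and 10.0.0.x addresses from the log; the `RUN=echo` indirection is this sketch's addition so it runs without root:

```shell
#!/bin/sh
# Dry-run sketch of the nvmf_tcp_init loopback topology from the log.
# RUN=echo prints the commands; set RUN="" to execute them as root.
RUN="${RUN:-echo}"
NS=cvl_0_0_ns_spdk

setup_loopback() {
    $RUN ip -4 addr flush cvl_0_0
    $RUN ip -4 addr flush cvl_0_1
    $RUN ip netns add "$NS"
    $RUN ip link set cvl_0_0 netns "$NS"                          # target port into the namespace
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP (root namespace)
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP (inside namespace)
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN ping -c 1 10.0.0.2                                       # initiator -> target
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
}

setup_loopback
```

Putting the target port in its own namespace is what lets two ports of the same NIC talk to each other over real wire/loopback instead of being short-circuited by the kernel's local routing; it is also why nvmf_tgt is later launched under `ip netns exec cvl_0_0_ns_spdk`.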
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3137424
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3137424
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3137424 ']'
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:08.192  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:08.192   00:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.192  [2024-12-10 00:06:23.881978] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:25:08.192  [2024-12-10 00:06:23.882026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:08.192  [2024-12-10 00:06:23.957709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:08.192  [2024-12-10 00:06:23.999822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:08.192  [2024-12-10 00:06:23.999861] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:08.192  [2024-12-10 00:06:23.999868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:08.192  [2024-12-10 00:06:23.999874] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:08.192  [2024-12-10 00:06:23.999879] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:08.192  [2024-12-10 00:06:24.001343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:08.192  [2024-12-10 00:06:24.001451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:08.192  [2024-12-10 00:06:24.001560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:08.192  [2024-12-10 00:06:24.001561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:08.192   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:08.193   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:25:08.193   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:08.193   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:08.193   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
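The waitforlisten step above polls (with `max_retries=100`) until the freshly launched nvmf_tgt is listening on /var/tmp/spdk.sock. The retry loop can be sketched as below; a plain file stands in for the UNIX socket so the sketch runs anywhere (the file path and the background creator are this sketch's stand-ins, not part of the harness; a real check would use `test -S` or an rpc.py probe against the socket):

```shell
#!/bin/sh
# Sketch of the waitforlisten polling pattern: wait for the target's RPC
# socket to appear, giving up after max_retries attempts.
sock="${TMPDIR:-/tmp}/fake_spdk.sock.$$"
( sleep 0.2; : > "$sock" ) &   # stand-in for the target creating its socket

max_retries=100
i=0
while [ ! -e "$sock" ]; do
    i=$((i + 1))
    if [ "$i" -ge "$max_retries" ]; then
        echo "timed out waiting for $sock" >&2
        exit 1
    fi
    sleep 0.1
done
echo "listening: $sock"
rm -f "$sock"
```

The bounded retry count is what turns a hung target into a clean test failure instead of a stalled CI job.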
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:25:08.457    00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:25:08.457    00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:25:08.457    00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457    00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457    00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457  [2024-12-10 00:06:24.207224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457  Malloc1
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:08.457  [2024-12-10 00:06:24.272472] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
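The adq_configure_nvmf_target phase above is a fixed sequence of JSON-RPC calls: set socket options on the posix implementation, finish framework init (the target was started with --wait-for-rpc), create the TCP transport, back a subsystem with a malloc bdev, and add a listener. A dry-run sketch of that sequence; `RPC="echo rpc.py"` is this sketch's indirection (the harness uses its own rpc_cmd wrapper, and rpc.py lives in the SPDK tree), so the calls are printed rather than issued:

```shell
#!/bin/sh
# Dry-run sketch of the adq_configure_nvmf_target RPC sequence from the log.
# Replace RPC with the real path to SPDK's scripts/rpc.py to issue the calls.
RPC="${RPC:-echo rpc.py}"

configure_target() {
    impl=posix   # the log's sock_get_default_impl returned posix
    $RPC sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"
    $RPC framework_start_init
    $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

configure_target
```

The ordering is significant: sock_impl_set_options must land before framework_start_init (hence --wait-for-rpc), and the transport must exist before the subsystem's listener can be added.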
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3137628
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:25:08.457   00:06:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:10.989   00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:25:10.989  "tick_rate": 2100000000,
00:25:10.989  "poll_groups": [
00:25:10.989  {
00:25:10.989  "name": "nvmf_tgt_poll_group_000",
00:25:10.989  "admin_qpairs": 1,
00:25:10.989  "io_qpairs": 1,
00:25:10.989  "current_admin_qpairs": 1,
00:25:10.989  "current_io_qpairs": 1,
00:25:10.989  "pending_bdev_io": 0,
00:25:10.989  "completed_nvme_io": 20411,
00:25:10.989  "transports": [
00:25:10.989  {
00:25:10.989  "trtype": "TCP"
00:25:10.989  }
00:25:10.989  ]
00:25:10.989  },
00:25:10.989  {
00:25:10.989  "name": "nvmf_tgt_poll_group_001",
00:25:10.989  "admin_qpairs": 0,
00:25:10.989  "io_qpairs": 1,
00:25:10.989  "current_admin_qpairs": 0,
00:25:10.989  "current_io_qpairs": 1,
00:25:10.989  "pending_bdev_io": 0,
00:25:10.989  "completed_nvme_io": 20435,
00:25:10.989  "transports": [
00:25:10.989  {
00:25:10.989  "trtype": "TCP"
00:25:10.989  }
00:25:10.989  ]
00:25:10.989  },
00:25:10.989  {
00:25:10.989  "name": "nvmf_tgt_poll_group_002",
00:25:10.989  "admin_qpairs": 0,
00:25:10.989  "io_qpairs": 1,
00:25:10.989  "current_admin_qpairs": 0,
00:25:10.989  "current_io_qpairs": 1,
00:25:10.989  "pending_bdev_io": 0,
00:25:10.989  "completed_nvme_io": 20496,
00:25:10.989  "transports": [
00:25:10.989  {
00:25:10.989  "trtype": "TCP"
00:25:10.989  }
00:25:10.989  ]
00:25:10.989  },
00:25:10.989  {
00:25:10.989  "name": "nvmf_tgt_poll_group_003",
00:25:10.989  "admin_qpairs": 0,
00:25:10.989  "io_qpairs": 1,
00:25:10.989  "current_admin_qpairs": 0,
00:25:10.989  "current_io_qpairs": 1,
00:25:10.989  "pending_bdev_io": 0,
00:25:10.989  "completed_nvme_io": 20498,
00:25:10.989  "transports": [
00:25:10.989  {
00:25:10.989  "trtype": "TCP"
00:25:10.989  }
00:25:10.989  ]
00:25:10.989  }
00:25:10.989  ]
00:25:10.989  }'
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:25:10.989    00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:25:10.989   00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:25:10.989   00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
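The check at perf_adq.sh lines 86-87 above counts the poll groups reporting `current_io_qpairs == 1` and fails unless all 4 are busy, i.e. unless ADQ spread the four perf connections across all four target cores. A grep-based equivalent of that jq pipeline, run against a trimmed copy of the nvmf_get_stats output from the log:

```shell
#!/bin/sh
# Count poll groups with one active I/O qpair, as perf_adq.sh@86 does with
# jq 'select(.current_io_qpairs == 1)' | wc -l. Stats trimmed from the log.
stats='{
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1},
    {"name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1}
  ]
}'

count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "active poll groups: $count"
if [ "$count" -ne 4 ]; then
    echo "expected 4 busy poll groups, got $count" >&2
    exit 1
fi
```

In this run the completed_nvme_io counters (20411/20435/20496/20498) are also nearly identical across the four groups, which is the balanced steering the count check is a proxy for.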
00:25:10.989   00:06:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3137628
00:25:19.123  Initializing NVMe Controllers
00:25:19.123  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:19.123  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:25:19.123  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:25:19.123  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:25:19.123  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:25:19.123  Initialization complete. Launching workers.
00:25:19.123  ========================================================
00:25:19.123                                                                                                               Latency(us)
00:25:19.123  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:19.123  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  4:   10572.38      41.30    6053.33    2341.74   10034.64
00:25:19.123  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  5:   10728.38      41.91    5964.86    2089.16   10460.39
00:25:19.123  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  6:   10752.48      42.00    5952.98    1605.54   12798.68
00:25:19.123  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  7:   10646.08      41.59    6010.47    2080.14   10299.26
00:25:19.123  ========================================================
00:25:19.123  Total                                                                    :   42699.31     166.79    5995.14    1605.54   12798.68
00:25:19.123  
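The spdk_nvme_perf summary above can be cross-checked: total IOPS is the sum of the per-core rows, and average latency their IOPS-weighted mean. A small awk sketch with the per-core numbers copied from the table (the results agree with the Total row up to last-digit rounding):

```shell
#!/bin/sh
# Recompute the Total row of the spdk_nvme_perf summary from the per-core
# rows (cores 4..7): total IOPS is a plain sum, average latency is the
# IOPS-weighted mean of the per-core averages.
awk 'BEGIN {
    split("10572.38 10728.38 10752.48 10646.08", iops)   # IOPS, cores 4..7
    split("6053.33 5964.86 5952.98 6010.47", lat)        # avg latency (us)
    for (i = 1; i <= 4; i++) {
        total += iops[i]
        wsum  += iops[i] * lat[i]
    }
    printf "total IOPS %.2f, weighted avg latency %.2f us\n", total, wsum / total
}'
```

The min and max columns of the Total row, by contrast, are simple minima/maxima over the per-core rows (1605.54 and 12798.68, both from core 6's row here).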
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:19.123  rmmod nvme_tcp
00:25:19.123  rmmod nvme_fabrics
00:25:19.123  rmmod nvme_keyring
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3137424 ']'
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3137424
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3137424 ']'
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3137424
00:25:19.123    00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:19.123    00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3137424
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3137424'
00:25:19.123  killing process with pid 3137424
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3137424
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3137424
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:19.123   00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:19.123    00:06:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:21.038   00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:21.038   00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:25:21.038   00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:25:21.038   00:06:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:25:22.416   00:06:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:25:24.950   00:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:30.224    00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:25:30.224  Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:25:30.224  Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:30.224  Found net devices under 0000:af:00.0: cvl_0_0
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:30.224   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:30.225  Found net devices under 0000:af:00.1: cvl_0_1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:30.225  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:30.225  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.465 ms
00:25:30.225  
00:25:30.225  --- 10.0.0.2 ping statistics ---
00:25:30.225  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:30.225  rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:30.225  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:30.225  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms
00:25:30.225  
00:25:30.225  --- 10.0.0.1 ping statistics ---
00:25:30.225  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:30.225  rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:25:30.225  net.core.busy_poll = 1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:25:30.225  net.core.busy_read = 1
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:25:30.225   00:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3141442
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3141442
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3141442 ']'
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:30.225  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:30.225   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.485  [2024-12-10 00:06:46.107974] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:25:30.485  [2024-12-10 00:06:46.108019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:30.485  [2024-12-10 00:06:46.184218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:30.485  [2024-12-10 00:06:46.227627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:30.485  [2024-12-10 00:06:46.227664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:30.485  [2024-12-10 00:06:46.227671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:30.485  [2024-12-10 00:06:46.227677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:30.485  [2024-12-10 00:06:46.227683] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:30.485  [2024-12-10 00:06:46.229103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:30.485  [2024-12-10 00:06:46.229215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:30.485  [2024-12-10 00:06:46.229253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:30.485  [2024-12-10 00:06:46.229254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1
00:25:30.485    00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:25:30.485    00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:25:30.485    00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.485    00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.485    00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.485   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743  [2024-12-10 00:06:46.430854] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743  Malloc1
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:30.743  [2024-12-10 00:06:46.503881] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3141676
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2
00:25:30.743   00:06:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:33.275   00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{
00:25:33.275  "tick_rate": 2100000000,
00:25:33.275  "poll_groups": [
00:25:33.275  {
00:25:33.275  "name": "nvmf_tgt_poll_group_000",
00:25:33.275  "admin_qpairs": 1,
00:25:33.275  "io_qpairs": 1,
00:25:33.275  "current_admin_qpairs": 1,
00:25:33.275  "current_io_qpairs": 1,
00:25:33.275  "pending_bdev_io": 0,
00:25:33.275  "completed_nvme_io": 28218,
00:25:33.275  "transports": [
00:25:33.275  {
00:25:33.275  "trtype": "TCP"
00:25:33.275  }
00:25:33.275  ]
00:25:33.275  },
00:25:33.275  {
00:25:33.275  "name": "nvmf_tgt_poll_group_001",
00:25:33.275  "admin_qpairs": 0,
00:25:33.275  "io_qpairs": 3,
00:25:33.275  "current_admin_qpairs": 0,
00:25:33.275  "current_io_qpairs": 3,
00:25:33.275  "pending_bdev_io": 0,
00:25:33.275  "completed_nvme_io": 29490,
00:25:33.275  "transports": [
00:25:33.275  {
00:25:33.275  "trtype": "TCP"
00:25:33.275  }
00:25:33.275  ]
00:25:33.275  },
00:25:33.275  {
00:25:33.275  "name": "nvmf_tgt_poll_group_002",
00:25:33.275  "admin_qpairs": 0,
00:25:33.275  "io_qpairs": 0,
00:25:33.275  "current_admin_qpairs": 0,
00:25:33.275  "current_io_qpairs": 0,
00:25:33.275  "pending_bdev_io": 0,
00:25:33.275  "completed_nvme_io": 0,
00:25:33.275  "transports": [
00:25:33.275  {
00:25:33.275  "trtype": "TCP"
00:25:33.275  }
00:25:33.275  ]
00:25:33.275  },
00:25:33.275  {
00:25:33.275  "name": "nvmf_tgt_poll_group_003",
00:25:33.275  "admin_qpairs": 0,
00:25:33.275  "io_qpairs": 0,
00:25:33.275  "current_admin_qpairs": 0,
00:25:33.275  "current_io_qpairs": 0,
00:25:33.275  "pending_bdev_io": 0,
00:25:33.275  "completed_nvme_io": 0,
00:25:33.275  "transports": [
00:25:33.275  {
00:25:33.275  "trtype": "TCP"
00:25:33.275  }
00:25:33.275  ]
00:25:33.275  }
00:25:33.275  ]
00:25:33.275  }'
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:25:33.275    00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l
00:25:33.275   00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2
00:25:33.275   00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]]
00:25:33.275   00:06:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3141676
00:25:41.388  Initializing NVMe Controllers
00:25:41.388  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:41.388  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:25:41.388  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:25:41.388  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:25:41.388  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:25:41.388  Initialization complete. Launching workers.
00:25:41.388  ========================================================
00:25:41.388                                                                                                               Latency(us)
00:25:41.388  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:25:41.388  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  4:   15619.67      61.01    4097.11    1791.03    6267.94
00:25:41.388  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  5:    5331.09      20.82   12008.93    1492.98   59540.52
00:25:41.388  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  6:    5224.29      20.41   12254.92    1223.15   60109.77
00:25:41.388  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  7:    4823.50      18.84   13271.42    1841.39   60908.32
00:25:41.388  ========================================================
00:25:41.388  Total                                                                    :   30998.55     121.09    8260.20    1223.15   60908.32
00:25:41.388  
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:41.388  rmmod nvme_tcp
00:25:41.388  rmmod nvme_fabrics
00:25:41.388  rmmod nvme_keyring
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3141442 ']'
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3141442
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3141442 ']'
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3141442
00:25:41.388    00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:41.388    00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3141442
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3141442'
00:25:41.388  killing process with pid 3141442
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3141442
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3141442
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:41.388   00:06:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:25:41.388   00:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:41.388   00:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:41.388   00:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:41.388   00:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:41.388    00:06:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT
00:25:44.678  
00:25:44.678  real	0m50.852s
00:25:44.678  user	2m43.724s
00:25:44.678  sys	0m10.216s
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:25:44.678  ************************************
00:25:44.678  END TEST nvmf_perf_adq
00:25:44.678  ************************************
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:25:44.678  ************************************
00:25:44.678  START TEST nvmf_shutdown
00:25:44.678  ************************************
00:25:44.678   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:25:44.678  * Looking for test storage...
00:25:44.678  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:25:44.678     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version
00:25:44.678     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:25:44.678    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:25:44.678     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1
00:25:44.678     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:25:44.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.679  		--rc genhtml_branch_coverage=1
00:25:44.679  		--rc genhtml_function_coverage=1
00:25:44.679  		--rc genhtml_legend=1
00:25:44.679  		--rc geninfo_all_blocks=1
00:25:44.679  		--rc geninfo_unexecuted_blocks=1
00:25:44.679  		
00:25:44.679  		'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:25:44.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.679  		--rc genhtml_branch_coverage=1
00:25:44.679  		--rc genhtml_function_coverage=1
00:25:44.679  		--rc genhtml_legend=1
00:25:44.679  		--rc geninfo_all_blocks=1
00:25:44.679  		--rc geninfo_unexecuted_blocks=1
00:25:44.679  		
00:25:44.679  		'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:25:44.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.679  		--rc genhtml_branch_coverage=1
00:25:44.679  		--rc genhtml_function_coverage=1
00:25:44.679  		--rc genhtml_legend=1
00:25:44.679  		--rc geninfo_all_blocks=1
00:25:44.679  		--rc geninfo_unexecuted_blocks=1
00:25:44.679  		
00:25:44.679  		'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:25:44.679  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:25:44.679  		--rc genhtml_branch_coverage=1
00:25:44.679  		--rc genhtml_function_coverage=1
00:25:44.679  		--rc genhtml_legend=1
00:25:44.679  		--rc geninfo_all_blocks=1
00:25:44.679  		--rc geninfo_unexecuted_blocks=1
00:25:44.679  		
00:25:44.679  		'
00:25:44.679   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:44.679     00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:44.679      00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:44.679      00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:44.679      00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:44.679      00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH
00:25:44.679      00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:25:44.679  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:25:44.679    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:25:44.680  ************************************
00:25:44.680  START TEST nvmf_shutdown_tc1
00:25:44.680  ************************************
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:44.680    00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable
00:25:44.680   00:07:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=()
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:51.259   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:25:51.259  Found 0000:af:00.0 (0x8086 - 0x159b)
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:25:51.260  Found 0000:af:00.1 (0x8086 - 0x159b)
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:25:51.260  Found net devices under 0000:af:00.0: cvl_0_0
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:25:51.260  Found net devices under 0000:af:00.1: cvl_0_1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:25:51.260   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:25:51.260  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:51.260  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms
00:25:51.260  
00:25:51.260  --- 10.0.0.2 ping statistics ---
00:25:51.261  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:51.261  rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:51.261  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:51.261  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms
00:25:51.261  
00:25:51.261  --- 10.0.0.1 ping statistics ---
00:25:51.261  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:51.261  rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
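The trace above (nvmf_tcp_init in nvmf/common.sh) builds the test topology: the target NIC is moved into a private network namespace, each side gets a /24 address, an iptables rule opens port 4420, and a ping in each direction confirms connectivity. A dry-run sketch of the same steps, using the interface names and IPs from this run as assumptions; it only prints the commands, since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf_tcp_init steps traced in the log above.
# Interface names and IPs are taken from this run; adjust for your setup.
target_if=cvl_0_0          # moved into the namespace, gets the target IP
initiator_if=cvl_0_1       # stays in the default namespace
ns=${target_if}_ns_spdk    # e.g. cvl_0_0_ns_spdk
initiator_ip=10.0.0.1
target_ip=10.0.0.2

run() { echo "+ $*"; }     # replace body with "$@" (as root) to actually apply

run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"
run ip addr add "$initiator_ip/24" dev "$initiator_if"
run ip netns exec "$ns" ip addr add "$target_ip/24" dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
# the harness wraps this rule with an SPDK_NVMF comment tag for cleanup
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$target_ip"
run ip netns exec "$ns" ping -c 1 "$initiator_ip"
```

From this point on the harness prefixes every target invocation with `ip netns exec "$ns"`, which is why nvmf_tgt later starts inside cvl_0_0_ns_spdk.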
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3147022
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3147022
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3147022 ']'
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:51.261  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.261  [2024-12-10 00:07:06.434855] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:25:51.261  [2024-12-10 00:07:06.434903] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:51.261  [2024-12-10 00:07:06.516892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:51.261  [2024-12-10 00:07:06.556230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:51.261  [2024-12-10 00:07:06.556266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:51.261  [2024-12-10 00:07:06.556272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:51.261  [2024-12-10 00:07:06.556278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:51.261  [2024-12-10 00:07:06.556283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:51.261  [2024-12-10 00:07:06.557777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:51.261  [2024-12-10 00:07:06.557887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:51.261  [2024-12-10 00:07:06.557971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:51.261  [2024-12-10 00:07:06.557971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
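The waitforlisten trace above (autotest_common.sh @835-@868) retries until the freshly started app answers on its UNIX domain socket. A minimal polling sketch of that idea; the socket path, retry count, and the file-existence probe are simplifications (the real helper actually exercises the RPC socket, e.g. via rpc.py):

```shell
# Minimal sketch of the waitforlisten pattern: poll until the app's RPC
# socket path appears, up to max_retries attempts, then give up.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while ((i++ < max_retries)); do
        # real helper probes the live RPC endpoint, not just the path
        [[ -e $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage: start the app in the background, then `wait_for_sock /var/tmp/spdk.sock || exit 1` before issuing any rpc_cmd calls.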
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.261  [2024-12-10 00:07:06.703269] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.261   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:51.262   00:07:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.262  Malloc1
00:25:51.262  [2024-12-10 00:07:06.812708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:51.262  Malloc2
00:25:51.262  Malloc3
00:25:51.262  Malloc4
00:25:51.262  Malloc5
00:25:51.262  Malloc6
00:25:51.262  Malloc7
00:25:51.262  Malloc8
00:25:51.522  Malloc9
00:25:51.522  Malloc10
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3147118
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3147118 /var/tmp/bdevperf.sock
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3147118 ']'
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:51.522  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:25:51.522   00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.522  {
00:25:51.522    "params": {
00:25:51.522      "name": "Nvme$subsystem",
00:25:51.522      "trtype": "$TEST_TRANSPORT",
00:25:51.522      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.522      "adrfam": "ipv4",
00:25:51.522      "trsvcid": "$NVMF_PORT",
00:25:51.522      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.522      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.522      "hdgst": ${hdgst:-false},
00:25:51.522      "ddgst": ${ddgst:-false}
00:25:51.522    },
00:25:51.522    "method": "bdev_nvme_attach_controller"
00:25:51.522  }
00:25:51.522  EOF
00:25:51.522  )")
00:25:51.522     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.522  {
00:25:51.522    "params": {
00:25:51.522      "name": "Nvme$subsystem",
00:25:51.522      "trtype": "$TEST_TRANSPORT",
00:25:51.522      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.522      "adrfam": "ipv4",
00:25:51.522      "trsvcid": "$NVMF_PORT",
00:25:51.522      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.522      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.522      "hdgst": ${hdgst:-false},
00:25:51.522      "ddgst": ${ddgst:-false}
00:25:51.522    },
00:25:51.522    "method": "bdev_nvme_attach_controller"
00:25:51.522  }
00:25:51.522  EOF
00:25:51.522  )")
00:25:51.522     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.522    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.522  {
00:25:51.522    "params": {
00:25:51.522      "name": "Nvme$subsystem",
00:25:51.522      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523  [2024-12-10 00:07:07.284020] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:25:51.523  [2024-12-10 00:07:07.284070] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.523      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.523      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.523      "hdgst": ${hdgst:-false},
00:25:51.523      "ddgst": ${ddgst:-false}
00:25:51.523    },
00:25:51.523    "method": "bdev_nvme_attach_controller"
00:25:51.523  }
00:25:51.523  EOF
00:25:51.523  )")
00:25:51.523     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.523    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.523  {
00:25:51.523    "params": {
00:25:51.523      "name": "Nvme$subsystem",
00:25:51.523      "trtype": "$TEST_TRANSPORT",
00:25:51.523      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.523      "adrfam": "ipv4",
00:25:51.523      "trsvcid": "$NVMF_PORT",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.524      "hdgst": ${hdgst:-false},
00:25:51.524      "ddgst": ${ddgst:-false}
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  }
00:25:51.524  EOF
00:25:51.524  )")
00:25:51.524     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:51.524    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:51.524    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:51.524  {
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme$subsystem",
00:25:51.524      "trtype": "$TEST_TRANSPORT",
00:25:51.524      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "$NVMF_PORT",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:51.524      "hdgst": ${hdgst:-false},
00:25:51.524      "ddgst": ${ddgst:-false}
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  }
00:25:51.524  EOF
00:25:51.524  )")
00:25:51.524     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
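The loop traced above (gen_nvmf_target_json in nvmf/common.sh @560-@586) builds one heredoc JSON object per subsystem, accumulates them in an array, then comma-joins and prints them. A rough self-contained sketch of the same pattern, with the transport values from this run hard-coded as assumptions instead of read from $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP/$NVMF_PORT:

```shell
# Sketch of the gen_nvmf_target_json pattern: one heredoc-built JSON
# fragment per subsystem, joined with commas via IFS and "${config[*]}".
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    local IFS=,
    printf '%s\n' "${config[*]}"   # the real helper also runs jq . to validate
}

gen_target_json 1 2   # two comma-joined bdev_nvme_attach_controller entries
```

The comma-joined fragments are what bdev_svc consumes via `--json /dev/fd/63` a few lines below; each fragment becomes one bdev_nvme_attach_controller call against the target.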
00:25:51.524    00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:25:51.524     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:25:51.524     00:07:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme1",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme2",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme3",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme4",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme5",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme6",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme7",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme8",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:25:51.524      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:25:51.524      "hdgst": false,
00:25:51.524      "ddgst": false
00:25:51.524    },
00:25:51.524    "method": "bdev_nvme_attach_controller"
00:25:51.524  },{
00:25:51.524    "params": {
00:25:51.524      "name": "Nvme9",
00:25:51.524      "trtype": "tcp",
00:25:51.524      "traddr": "10.0.0.2",
00:25:51.524      "adrfam": "ipv4",
00:25:51.524      "trsvcid": "4420",
00:25:51.524      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:25:51.525      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:25:51.525      "hdgst": false,
00:25:51.525      "ddgst": false
00:25:51.525    },
00:25:51.525    "method": "bdev_nvme_attach_controller"
00:25:51.525  },{
00:25:51.525    "params": {
00:25:51.525      "name": "Nvme10",
00:25:51.525      "trtype": "tcp",
00:25:51.525      "traddr": "10.0.0.2",
00:25:51.525      "adrfam": "ipv4",
00:25:51.525      "trsvcid": "4420",
00:25:51.525      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:25:51.525      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:25:51.525      "hdgst": false,
00:25:51.525      "ddgst": false
00:25:51.525    },
00:25:51.525    "method": "bdev_nvme_attach_controller"
00:25:51.525  }'
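The JSON argument printed above is assembled by the `gen_nvmf_target_json` helper traced later in this log: one `bdev_nvme_attach_controller` fragment per subsystem id, accumulated in a bash array and joined with commas via `IFS`. A minimal standalone sketch of that pattern, using the same variable names as the trace (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`; the defaults below are placeholders, not the values from this run):

```shell
#!/usr/bin/env bash
# Placeholder defaults; the real run resolves these from the test environment.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_nvmf_target_json() {
    local subsystem
    local config=()
    # One JSON fragment per subsystem id passed as an argument (default: 1).
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join fragments with commas, matching the "},{"-separated dump above.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1 2 3
```

The joined output is then fed to `bdev_svc`/`bdevperf` through a process substitution (`--json <(gen_nvmf_target_json ...)`), as shown on the `Killed` line further down.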
00:25:51.525  [2024-12-10 00:07:07.362228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:51.783  [2024-12-10 00:07:07.402923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3147118
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1
00:25:53.686   00:07:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1
00:25:54.623  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3147118 Killed                  $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
00:25:54.623   00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3147022
00:25:54.624   00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624  [2024-12-10 00:07:10.230029] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:25:54.624  [2024-12-10 00:07:10.230081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3147699 ]
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.624  }
00:25:54.624  EOF
00:25:54.624  )")
00:25:54.624     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.624    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.624  {
00:25:54.624    "params": {
00:25:54.624      "name": "Nvme$subsystem",
00:25:54.624      "trtype": "$TEST_TRANSPORT",
00:25:54.624      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.624      "adrfam": "ipv4",
00:25:54.624      "trsvcid": "$NVMF_PORT",
00:25:54.624      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.624      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.624      "hdgst": ${hdgst:-false},
00:25:54.624      "ddgst": ${ddgst:-false}
00:25:54.624    },
00:25:54.624    "method": "bdev_nvme_attach_controller"
00:25:54.625  }
00:25:54.625  EOF
00:25:54.625  )")
00:25:54.625     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.625    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:25:54.625    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:25:54.625  {
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme$subsystem",
00:25:54.625      "trtype": "$TEST_TRANSPORT",
00:25:54.625      "traddr": "$NVMF_FIRST_TARGET_IP",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "$NVMF_PORT",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:25:54.625      "hdgst": ${hdgst:-false},
00:25:54.625      "ddgst": ${ddgst:-false}
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  }
00:25:54.625  EOF
00:25:54.625  )")
00:25:54.625     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:25:54.625    00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:25:54.625     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=,
00:25:54.625     00:07:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme1",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme2",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme3",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme4",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme5",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme6",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme7",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme8",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme9",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  },{
00:25:54.625    "params": {
00:25:54.625      "name": "Nvme10",
00:25:54.625      "trtype": "tcp",
00:25:54.625      "traddr": "10.0.0.2",
00:25:54.625      "adrfam": "ipv4",
00:25:54.625      "trsvcid": "4420",
00:25:54.625      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:25:54.625      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:25:54.625      "hdgst": false,
00:25:54.625      "ddgst": false
00:25:54.625    },
00:25:54.625    "method": "bdev_nvme_attach_controller"
00:25:54.625  }'
00:25:54.625  [2024-12-10 00:07:10.308110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:54.625  [2024-12-10 00:07:10.347673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.052  Running I/O for 1 seconds...
00:25:57.277       2247.00 IOPS,   140.44 MiB/s
00:25:57.277                                                                                                  Latency(us)
00:25:57.277  
[2024-12-09T23:07:13.134Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:25:57.277  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme1n1             :       1.08     241.73      15.11       0.00     0.00  261553.82    4587.52  222697.57
00:25:57.277  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme2n1             :       1.14     284.17      17.76       0.00     0.00  218357.49   10735.42  211712.49
00:25:57.277  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme3n1             :       1.12     286.38      17.90       0.00     0.00  215283.42   13481.69  210713.84
00:25:57.277  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme4n1             :       1.13     283.41      17.71       0.00     0.00  214491.92   19598.38  217704.35
00:25:57.277  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme5n1             :       1.14     285.30      17.83       0.00     0.00  209069.49    7240.17  212711.13
00:25:57.277  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme6n1             :       1.15     278.73      17.42       0.00     0.00  211961.37   17226.61  229688.08
00:25:57.277  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme7n1             :       1.14     281.76      17.61       0.00     0.00  206312.89   15853.47  211712.49
00:25:57.277  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme8n1             :       1.14     279.92      17.49       0.00     0.00  204692.04   13232.03  213709.78
00:25:57.277  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme9n1             :       1.16     276.73      17.30       0.00     0.00  203412.63   24341.94  226692.14
00:25:57.277  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:57.277  	 Verification LBA range: start 0x0 length 0x400
00:25:57.277  	 Nvme10n1            :       1.15     277.49      17.34       0.00     0.00  200481.26   17101.78  232684.01
00:25:57.277  
[2024-12-09T23:07:13.134Z]  ===================================================================================================================
00:25:57.277  
[2024-12-09T23:07:13.134Z]  Total                       :               2775.62     173.48       0.00     0.00  213660.87    4587.52  232684.01
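As a quick sanity check on the totals row, the MiB/s column follows directly from the IOPS column and the 64 KiB IO size passed to bdevperf (`-o 65536`): MiB/s = IOPS × 65536 / 2^20, i.e. IOPS / 16. Reproducing the aggregate figure:

```shell
# 2775.62 total IOPS at 65536-byte IOs; 1048576 bytes per MiB.
awk 'BEGIN { printf "%.2f\n", 2775.62 * 65536 / 1048576 }'
```

This prints 173.48, matching the reported total throughput.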
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:57.536  rmmod nvme_tcp
00:25:57.536  rmmod nvme_fabrics
00:25:57.536  rmmod nvme_keyring
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3147022 ']'
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3147022
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3147022 ']'
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 3147022
00:25:57.536    00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:57.536    00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147022
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147022'
00:25:57.536  killing process with pid 3147022
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3147022
00:25:57.536   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3147022
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:58.104   00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:58.104    00:07:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:00.009   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:00.009  
00:26:00.009  real	0m15.340s
00:26:00.009  user	0m34.399s
00:26:00.009  sys	0m5.807s
00:26:00.009   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:00.009   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:26:00.009  ************************************
00:26:00.009  END TEST nvmf_shutdown_tc1
00:26:00.009  ************************************
00:26:00.009   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:26:00.009   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:00.010  ************************************
00:26:00.010  START TEST nvmf_shutdown_tc2
00:26:00.010  ************************************
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:00.010    00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=()
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:00.010  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:00.010  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:00.010  Found net devices under 0000:af:00.0: cvl_0_0
00:26:00.010   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:00.011  Found net devices under 0000:af:00.1: cvl_0_1
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:00.011   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:00.272   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:00.272   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:00.272   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:00.272   00:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:00.272  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:00.272  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms
00:26:00.272  
00:26:00.272  --- 10.0.0.2 ping statistics ---
00:26:00.272  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.272  rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:00.272  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:00.272  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms
00:26:00.272  
00:26:00.272  --- 10.0.0.1 ping statistics ---
00:26:00.272  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:00.272  rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:00.272   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
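The `nvmf_tcp_init` trace above boils down to a short privileged sequence: move the target NIC into a fresh network namespace, address both sides, open the NVMe/TCP port, and verify connectivity in both directions. A dry-run sketch of those steps, using the interface names and addresses from this run (`RUN=echo` prints the commands instead of executing them; drop it to apply the setup for real, which requires root and the `cvl_0_*` NICs):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above.
# RUN=echo prints each privileged command rather than executing it.
RUN=echo
NS=cvl_0_0_ns_spdk

$RUN ip -4 addr flush cvl_0_0
$RUN ip -4 addr flush cvl_0_1
$RUN ip netns add "$NS"
$RUN ip link set cvl_0_0 netns "$NS"              # target NIC moves into the namespace
$RUN ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
$RUN ip link set cvl_0_1 up
$RUN ip netns exec "$NS" ip link set cvl_0_0 up
$RUN ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic on the discovery/IO port used by this test
$RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
$RUN ping -c 1 10.0.0.2                           # root ns -> namespaced target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1       # namespaced target -> root ns
```

The namespace isolation is why every target-side command in the log is wrapped in `ip netns exec cvl_0_0_ns_spdk`: the `nvmf_tgt` process must run inside the namespace that owns `cvl_0_0`.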
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3148785
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3148785
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3148785 ']'
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:00.532  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:00.532   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.532  [2024-12-10 00:07:16.208061] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:00.532  [2024-12-10 00:07:16.208108] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:00.532  [2024-12-10 00:07:16.285398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:00.532  [2024-12-10 00:07:16.326519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:00.532  [2024-12-10 00:07:16.326558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:00.532  [2024-12-10 00:07:16.326566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:00.532  [2024-12-10 00:07:16.326572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:00.532  [2024-12-10 00:07:16.326577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:00.532  [2024-12-10 00:07:16.331184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:00.533  [2024-12-10 00:07:16.331296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:00.533  [2024-12-10 00:07:16.331417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:00.533  [2024-12-10 00:07:16.331417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.792  [2024-12-10 00:07:16.467529] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:00.792   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:00.792  Malloc1
00:26:00.792  [2024-12-10 00:07:16.580270] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:00.792  Malloc2
00:26:00.792  Malloc3
00:26:01.049  Malloc4
00:26:01.049  Malloc5
00:26:01.049  Malloc6
00:26:01.049  Malloc7
00:26:01.049  Malloc8
00:26:01.049  Malloc9
00:26:01.308  Malloc10
00:26:01.308   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:01.308   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:01.308   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:01.308   00:07:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3148869
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3148869 /var/tmp/bdevperf.sock
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3148869 ']'
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:01.308  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=()
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config
00:26:01.308   00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:01.308  {
00:26:01.308    "params": {
00:26:01.308      "name": "Nvme$subsystem",
00:26:01.308      "trtype": "$TEST_TRANSPORT",
00:26:01.308      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:01.308      "adrfam": "ipv4",
00:26:01.308      "trsvcid": "$NVMF_PORT",
00:26:01.308      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:01.308      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:01.308      "hdgst": ${hdgst:-false},
00:26:01.308      "ddgst": ${ddgst:-false}
00:26:01.308    },
00:26:01.308    "method": "bdev_nvme_attach_controller"
00:26:01.308  }
00:26:01.308  EOF
00:26:01.308  )")
00:26:01.308     00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:01.308  {
00:26:01.308    "params": {
00:26:01.308      "name": "Nvme$subsystem",
00:26:01.308      "trtype": "$TEST_TRANSPORT",
00:26:01.308      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:01.308      "adrfam": "ipv4",
00:26:01.308      "trsvcid": "$NVMF_PORT",
00:26:01.308      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:01.308      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:01.308      "hdgst": ${hdgst:-false},
00:26:01.308      "ddgst": ${ddgst:-false}
00:26:01.308    },
00:26:01.308    "method": "bdev_nvme_attach_controller"
00:26:01.308  }
00:26:01.308  EOF
00:26:01.308  )")
00:26:01.308     00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:01.308    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:01.308  {
00:26:01.308    "params": {
00:26:01.309      "name": "Nvme$subsystem",
00:26:01.309      "trtype": "$TEST_TRANSPORT",
00:26:01.309      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:01.309      "adrfam": "ipv4",
00:26:01.309      "trsvcid": "$NVMF_PORT",
00:26:01.309      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:01.309      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:01.309      "hdgst": ${hdgst:-false},
00:26:01.309      "ddgst": ${ddgst:-false}
00:26:01.309    },
00:26:01.309    "method": "bdev_nvme_attach_controller"
00:26:01.309  }
00:26:01.309  EOF
00:26:01.309  )")
00:26:01.309     00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat
00:26:01.309  [2024-12-10 00:07:17.054279] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:01.309  [2024-12-10 00:07:17.054331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3148869 ]
00:26:01.309    00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq .
00:26:01.309     00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=,
00:26:01.309     00:07:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:26:01.309    "params": {
00:26:01.309      "name": "Nvme1",
00:26:01.309      "trtype": "tcp",
00:26:01.309      "traddr": "10.0.0.2",
00:26:01.309      "adrfam": "ipv4",
00:26:01.309      "trsvcid": "4420",
00:26:01.309      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:01.309      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:01.309      "hdgst": false,
00:26:01.309      "ddgst": false
00:26:01.309    },
00:26:01.309    "method": "bdev_nvme_attach_controller"
00:26:01.309  },{
00:26:01.309    "params": {
00:26:01.309      "name": "Nvme2",
00:26:01.309      "trtype": "tcp",
00:26:01.309      "traddr": "10.0.0.2",
00:26:01.309      "adrfam": "ipv4",
00:26:01.309      "trsvcid": "4420",
00:26:01.309      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:01.309      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:01.309      "hdgst": false,
00:26:01.309      "ddgst": false
00:26:01.309    },
00:26:01.309    "method": "bdev_nvme_attach_controller"
00:26:01.309  },{
00:26:01.309    "params": {
00:26:01.309      "name": "Nvme3",
00:26:01.309      "trtype": "tcp",
00:26:01.309      "traddr": "10.0.0.2",
00:26:01.309      "adrfam": "ipv4",
00:26:01.309      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme4",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme5",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme6",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme7",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme8",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme9",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  },{
00:26:01.310    "params": {
00:26:01.310      "name": "Nvme10",
00:26:01.310      "trtype": "tcp",
00:26:01.310      "traddr": "10.0.0.2",
00:26:01.310      "adrfam": "ipv4",
00:26:01.310      "trsvcid": "4420",
00:26:01.310      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:26:01.310      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:26:01.310      "hdgst": false,
00:26:01.310      "ddgst": false
00:26:01.310    },
00:26:01.310    "method": "bdev_nvme_attach_controller"
00:26:01.310  }'
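The trace above shows how nvmf/common.sh assembles the bdevperf controller config: each loop iteration captures one JSON fragment into a bash array via a heredoc, then the fragments are comma-joined through `IFS` and printed in one shot (the real script additionally pipes the result through `jq .` inside a larger config document). A minimal, self-contained sketch of that pattern; the helper name `gen_nvmf_config` is illustrative, not part of the actual script:

```shell
# Sketch of the config-assembly pattern from nvmf/common.sh in the trace:
# one heredoc-captured JSON fragment per subsystem, comma-joined via IFS.
gen_nvmf_config() {
    local config=() subsystem
    # "${@:-1}" defaults to subsystem 1 when no arguments are given,
    # matching the loop header visible in the xtrace output.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments; the real script feeds this through `jq .`
    # after embedding it in the surrounding bdev configuration JSON.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_config 1 2
```

Because `"${config[*]}"` expands with the first character of `IFS` between elements, adjacent fragments end up separated by `},{`, exactly as in the `printf '%s\n'` output above.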
00:26:01.310  [2024-12-10 00:07:17.133464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:01.569  [2024-12-10 00:07:17.173725] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:03.471  Running I/O for 10 seconds...
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i
00:26:03.471   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:26:03.472   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:03.472    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:03.472    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:03.472    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.472    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.472    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.472   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3
00:26:03.472   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']'
00:26:03.472   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:26:03.731   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:26:03.731   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:03.731    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:03.731    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:03.731    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.731    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.731    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.731   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67
00:26:03.731   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:26:03.731   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- ))
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']'
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0
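The `waitforio` calls traced above (target/shutdown.sh) implement a bounded polling loop: up to 10 attempts, each reading `num_read_ops` from `rpc_cmd ... bdev_get_iostat` via `jq`, sleeping 0.25 s between attempts, and succeeding once the count reaches 100. A hedged sketch of that loop, with the I/O probe injected as a command name so it can run without a live bdevperf socket (in the real script the probe is the `rpc_cmd ... | jq -r '.bdevs[0].num_read_ops'` pipeline):

```shell
# Sketch of the waitforio polling loop from target/shutdown.sh:
# poll an I/O counter until it crosses a threshold or the retry budget
# (10 attempts, 0.25 s apart) is exhausted. $1 names a command that
# prints the current read-op count.
waitforio() {
    local get_count=$1 threshold=${2:-100} ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$("$get_count")
        if [ "$count" -ge "$threshold" ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

The decrementing counter mirrors the `(( i = 10 ))` / `(( i != 0 ))` / `(( i-- ))` steps visible in the xtrace; in the run above the count climbed 3 → 67 → 195, crossing the threshold on the third attempt.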
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3148869
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3148869 ']'
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3148869
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:03.990    00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148869
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:03.990   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148869'
00:26:03.990  killing process with pid 3148869
00:26:03.991   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3148869
00:26:03.991   00:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3148869
00:26:04.249  Received shutdown signal, test time was about 0.991221 seconds
00:26:04.249  
00:26:04.249                                                                                                  Latency(us)
00:26:04.249  
[2024-12-09T23:07:20.106Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:04.249  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme1n1             :       0.98     262.20      16.39       0.00     0.00  241426.29   18350.08  214708.42
00:26:04.249  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme2n1             :       0.98     261.32      16.33       0.00     0.00  238443.03   16852.11  216705.71
00:26:04.249  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme3n1             :       0.99     323.65      20.23       0.00     0.00  189373.68   17725.93  217704.35
00:26:04.249  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme4n1             :       0.99     323.05      20.19       0.00     0.00  186011.84   11359.57  214708.42
00:26:04.249  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme5n1             :       0.97     289.29      18.08       0.00     0.00  201687.07   13481.69  211712.49
00:26:04.249  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme6n1             :       0.96     267.58      16.72       0.00     0.00  216950.98   29335.16  198730.12
00:26:04.249  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme7n1             :       0.96     266.49      16.66       0.00     0.00  213699.29   14979.66  212711.13
00:26:04.249  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme8n1             :       0.97     264.72      16.54       0.00     0.00  212077.59   13419.28  218702.99
00:26:04.249  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme9n1             :       0.98     260.26      16.27       0.00     0.00  212325.91   19099.06  219701.64
00:26:04.249  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:04.249  	 Verification LBA range: start 0x0 length 0x400
00:26:04.249  	 Nvme10n1            :       0.99     259.48      16.22       0.00     0.00  209306.58   17101.78  234681.30
00:26:04.249  
[2024-12-09T23:07:20.106Z]  ===================================================================================================================
00:26:04.249  
[2024-12-09T23:07:20.106Z]  Total                       :               2778.04     173.63       0.00     0.00  210881.03   11359.57  234681.30
00:26:04.249   00:07:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:26:05.626   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3148785
00:26:05.626   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:26:05.626   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:05.626   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:05.627  rmmod nvme_tcp
00:26:05.627  rmmod nvme_fabrics
00:26:05.627  rmmod nvme_keyring
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3148785 ']'
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3148785
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3148785 ']'
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3148785
00:26:05.627    00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:05.627    00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3148785
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3148785'
00:26:05.627  killing process with pid 3148785
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3148785
00:26:05.627   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3148785
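Both teardown paths above use the same `killprocess` helper from common/autotest_common.sh: verify the pid is set and still alive (`kill -0`), refuse to signal a process whose comm is `sudo`, then kill and reap it. A hedged, Linux-assuming sketch of that sequence (it relies on GNU `ps -o comm=`, matching the `uname`/`Linux` guard in the trace):

```shell
# Sketch of the killprocess helper from common/autotest_common.sh:
# sanity-check the pid, apply the "never kill sudo" guard seen in the
# trace, then terminate and reap the process.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1            # still running?
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != "sudo" ] || return 1         # safety guard
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                   # reap if it's our child
}
```

The `wait` matters in the test scripts: it both reaps the child and lets the caller block until shutdown actually completes before moving on to `nvmftestfini`.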
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:05.886   00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:05.886    00:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:07.791   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:07.791  
00:26:07.791  real	0m7.805s
00:26:07.791  user	0m23.707s
00:26:07.791  sys	0m1.431s
00:26:07.791   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:07.791   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:07.791  ************************************
00:26:07.791  END TEST nvmf_shutdown_tc2
00:26:07.791  ************************************
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:08.051  ************************************
00:26:08.051  START TEST nvmf_shutdown_tc3
00:26:08.051  ************************************
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:08.051    00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=()
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:08.051  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:08.051  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:08.051   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:08.052  Found net devices under 0000:af:00.0: cvl_0_0
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:08.052  Found net devices under 0000:af:00.1: cvl_0_1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:08.052   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:08.310  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:08.310  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms
00:26:08.310  
00:26:08.310  --- 10.0.0.2 ping statistics ---
00:26:08.310  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:08.310  rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:08.310  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:08.310  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms
00:26:08.310  
00:26:08.310  --- 10.0.0.1 ping statistics ---
00:26:08.310  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:08.310  rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
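The `nvmf_tcp_init` steps traced above (lines 271-291 of nvmf/common.sh) can be condensed into a dry-run sketch. Interface and namespace names are copied from this run; `run` is a hypothetical wrapper that prints each command instead of executing it, since the real sequence needs root and the physical E810 ports:

```shell
# Dry-run sketch of the point-to-point netns plumbing performed above.
# Swap run() for 'sudo "$@"' to actually execute.
run() { echo "+ $*"; }

IFACE_TGT=cvl_0_0 IFACE_INIT=cvl_0_1 NS=cvl_0_0_ns_spdk

run ip netns add "$NS"                                    # isolated namespace for the target
run ip link set "$IFACE_TGT" netns "$NS"                  # move the target port into it
run ip addr add 10.0.0.1/24 dev "$IFACE_INIT"             # initiator-side address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IFACE_TGT"  # target-side address
run ip link set "$IFACE_INIT" up
run ip netns exec "$NS" ip link set "$IFACE_TGT" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                    # initiator -> target check
run ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator check
```

Because the target port lives in its own namespace, both directions of the ping exercise the real NIC path rather than the kernel loopback.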
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:08.310   00:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:08.310   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:26:08.310   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:08.310   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:08.310   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.310   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3150135
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3150135
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3150135 ']'
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:08.311  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:08.311   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.311  [2024-12-10 00:07:24.066200] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:08.311  [2024-12-10 00:07:24.066251] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:08.311  [2024-12-10 00:07:24.143047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:08.569  [2024-12-10 00:07:24.185119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:08.569  [2024-12-10 00:07:24.185156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:08.569  [2024-12-10 00:07:24.185164] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:08.569  [2024-12-10 00:07:24.185175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:08.569  [2024-12-10 00:07:24.185180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:08.569  [2024-12-10 00:07:24.186686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:08.569  [2024-12-10 00:07:24.186799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:08.569  [2024-12-10 00:07:24.186906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:08.569  [2024-12-10 00:07:24.186907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:08.569   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:08.569   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
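The `waitforlisten 3150135` call traced above polls until the freshly launched `nvmf_tgt` is alive and its RPC socket (`/var/tmp/spdk.sock`) exists. A rough, self-contained sketch of that pattern — function name and retry cadence are assumptions for illustration, not SPDK's exact implementation:

```shell
# Hedged sketch of a waitforlisten-style loop: succeed once the app's RPC
# unix socket appears, fail fast if the process dies or retries run out.
wait_for_rpc_sock() {
  local pid=$1 sock=$2 retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
    [ -S "$sock" ] && return 0               # socket is up, RPCs can proceed
    sleep 0.1
  done
  return 1                                   # timed out
}
```

The real helper additionally issues an RPC over the socket to confirm the app is accepting requests, which is why the log shows the "Waiting for process to start up and listen..." message before any `rpc_cmd` runs.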
00:26:08.569   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:08.569   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.570  [2024-12-10 00:07:24.332219] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:08.570   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:08.570  Malloc1
00:26:08.828  [2024-12-10 00:07:24.442138] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:08.828  Malloc2
00:26:08.828  Malloc3
00:26:08.828  Malloc4
00:26:08.828  Malloc5
00:26:08.828  Malloc6
00:26:08.828  Malloc7
00:26:09.088  Malloc8
00:26:09.088  Malloc9
00:26:09.088  Malloc10
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3150359
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3150359 /var/tmp/bdevperf.sock
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3150359 ']'
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:09.088  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:09.088   00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=()
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:09.088  {
00:26:09.088    "params": {
00:26:09.088      "name": "Nvme$subsystem",
00:26:09.088      "trtype": "$TEST_TRANSPORT",
00:26:09.088      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:09.088      "adrfam": "ipv4",
00:26:09.088      "trsvcid": "$NVMF_PORT",
00:26:09.088      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:09.088      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:09.088      "hdgst": ${hdgst:-false},
00:26:09.088      "ddgst": ${ddgst:-false}
00:26:09.088    },
00:26:09.088    "method": "bdev_nvme_attach_controller"
00:26:09.088  }
00:26:09.088  EOF
00:26:09.088  )")
00:26:09.088     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:09.088  {
00:26:09.088    "params": {
00:26:09.088      "name": "Nvme$subsystem",
00:26:09.088      "trtype": "$TEST_TRANSPORT",
00:26:09.088      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:09.088      "adrfam": "ipv4",
00:26:09.088      "trsvcid": "$NVMF_PORT",
00:26:09.088      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:09.088      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:09.088      "hdgst": ${hdgst:-false},
00:26:09.088      "ddgst": ${ddgst:-false}
00:26:09.088    },
00:26:09.088    "method": "bdev_nvme_attach_controller"
00:26:09.088  }
00:26:09.088  EOF
00:26:09.088  )")
00:26:09.088     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:09.088  {
00:26:09.088    "params": {
00:26:09.088      "name": "Nvme$subsystem",
00:26:09.088      "trtype": "$TEST_TRANSPORT",
00:26:09.088      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:09.088      "adrfam": "ipv4",
00:26:09.088      "trsvcid": "$NVMF_PORT",
00:26:09.088      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:09.088      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:09.088      "hdgst": ${hdgst:-false},
00:26:09.088      "ddgst": ${ddgst:-false}
00:26:09.088    },
00:26:09.088    "method": "bdev_nvme_attach_controller"
00:26:09.088  }
00:26:09.088  EOF
00:26:09.088  )")
00:26:09.088     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:26:09.088    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:26:09.088  {
00:26:09.088    "params": {
00:26:09.088      "name": "Nvme$subsystem",
00:26:09.088      "trtype": "$TEST_TRANSPORT",
00:26:09.088      "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:09.088      "adrfam": "ipv4",
00:26:09.088      "trsvcid": "$NVMF_PORT",
00:26:09.088      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:09.088      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:09.088      "hdgst": ${hdgst:-false},
00:26:09.088      "ddgst": ${ddgst:-false}
00:26:09.088    },
00:26:09.088    "method": "bdev_nvme_attach_controller"
00:26:09.088  }
00:26:09.088  EOF
00:26:09.088  )")
00:26:09.088     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat
00:26:09.089  [2024-12-10 00:07:24.917932] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:09.089  [2024-12-10 00:07:24.917977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3150359 ]
00:26:09.089    00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
00:26:09.348     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=,
00:26:09.348     00:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme1",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme2",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme3",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host3",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme4",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host4",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme5",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host5",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme6",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host6",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme7",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host7",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme8",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host8",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme9",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host9",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  },{
00:26:09.348    "params": {
00:26:09.348      "name": "Nvme10",
00:26:09.348      "trtype": "tcp",
00:26:09.348      "traddr": "10.0.0.2",
00:26:09.348      "adrfam": "ipv4",
00:26:09.348      "trsvcid": "4420",
00:26:09.348      "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:26:09.348      "hostnqn": "nqn.2016-06.io.spdk:host10",
00:26:09.348      "hdgst": false,
00:26:09.348      "ddgst": false
00:26:09.348    },
00:26:09.348    "method": "bdev_nvme_attach_controller"
00:26:09.348  }'
00:26:09.348  [2024-12-10 00:07:24.992325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:09.349  [2024-12-10 00:07:25.031802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:10.724  Running I/O for 10 seconds...
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']'
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 ))
00:26:10.983   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:10.983    00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:10.983    00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:10.983    00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:10.983    00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:11.242    00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.242   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=88
00:26:11.242   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 88 -ge 100 ']'
00:26:11.242   00:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:26:11.521   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:26:11.521   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:26:11.521    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:26:11.521    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:26:11.521    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:11.521    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:11.521    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:11.521   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195
00:26:11.521   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']'
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3150135
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3150135 ']'
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3150135
00:26:11.522    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:11.522    00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3150135
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3150135'
00:26:11.522  killing process with pid 3150135
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3150135
00:26:11.522   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3150135
00:26:11.522  [2024-12-10 00:07:27.209575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18840 is same with the state(6) to be set
00:26:11.523  [2024-12-10 00:07:27.212106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.212531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18d10 is same with the state(6) to be set
00:26:11.524  [2024-12-10 00:07:27.213129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.524  [2024-12-10 00:07:27.213277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.524  [2024-12-10 00:07:27.213284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.525  [2024-12-10 00:07:27.213615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.525  [2024-12-10 00:07:27.213623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.526  [2024-12-10 00:07:27.213980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.526  [2024-12-10 00:07:27.213988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.213994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.527  [2024-12-10 00:07:27.214110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:11.527  [2024-12-10 00:07:27.214255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c000 is same with the state(6) to be set
00:26:11.527  [2024-12-10 00:07:27.214351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98370 is same with the state(6) to be set
00:26:11.527  [2024-12-10 00:07:27.214453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c490 is same with the state(6) to be set
00:26:11.527  [2024-12-10 00:07:27.214539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.527  [2024-12-10 00:07:27.214555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.527  [2024-12-10 00:07:27.214562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.528  [2024-12-10 00:07:27.214569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.528  [2024-12-10 00:07:27.214575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.528  [2024-12-10 00:07:27.214584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.528  [2024-12-10 00:07:27.214590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.528  [2024-12-10 00:07:27.214596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa304d0 is same with the state(6) to be set
00:26:11.528  [2024-12-10 00:07:27.215068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc196d0 is same with the state(6) to be set
00:26:11.528  [... previous message repeated 62 more times for tqpair=0xc196d0, 00:07:27.215097 through 00:07:27.215483 ...]
00:26:11.529  [2024-12-10 00:07:27.216262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc19ba0 is same with the state(6) to be set
00:26:11.529  [... previous message repeated 62 more times for tqpair=0xc19ba0, 00:07:27.216284 through 00:07:27.216686 ...]
00:26:11.530  [2024-12-10 00:07:27.216821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:11.530  [2024-12-10 00:07:27.216858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98370 (9): Bad file descriptor
00:26:11.530  [2024-12-10 00:07:27.217812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a070 is same with the state(6) to be set
00:26:11.530  [... previous message repeated 62 more times for tqpair=0xc1a070, 00:07:27.217826 through 00:07:27.218210 ...]
00:26:11.531  [2024-12-10 00:07:27.218499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.531  [2024-12-10 00:07:27.218524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98370 with addr=10.0.0.2, port=4420
00:26:11.531  [2024-12-10 00:07:27.218533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98370 is same with the state(6) to be set
00:26:11.531  [2024-12-10 00:07:27.219386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.531  [2024-12-10 00:07:27.219413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.531  [2024-12-10 00:07:27.219421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.531  [2024-12-10 00:07:27.219427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219548] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219609] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219621] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219683] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a540 is same with the state(6) to be set
00:26:11.532  [2024-12-10 00:07:27.219936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98370 (9): Bad file descriptor
00:26:11.533  [2024-12-10 00:07:27.220002] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.220046] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.220088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.533  [2024-12-10 00:07:27.220099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.533  [2024-12-10 00:07:27.220112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.533  [2024-12-10 00:07:27.220121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.533  [2024-12-10 00:07:27.220130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.533  [2024-12-10 00:07:27.220137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.533  [2024-12-10 00:07:27.220145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.533  [2024-12-10 00:07:27.220152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.533  [2024-12-10 00:07:27.220160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe13900 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220690] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:11.533  [2024-12-10 00:07:27.220799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:11.533  [2024-12-10 00:07:27.220817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:11.533  [2024-12-10 00:07:27.220824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.220831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:11.533  [2024-12-10 00:07:27.221575] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.222188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:11.533  [2024-12-10 00:07:27.222217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c000 (9): Bad file descriptor
00:26:11.533  [2024-12-10 00:07:27.222291] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.222903] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.223384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.533  [2024-12-10 00:07:27.223406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3c000 with addr=10.0.0.2, port=4420
00:26:11.533  [2024-12-10 00:07:27.223415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c000 is same with the state(6) to be set
00:26:11.533  [2024-12-10 00:07:27.223495] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:26:11.533  [2024-12-10 00:07:27.223749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c000 (9): Bad file descriptor
00:26:11.533  [2024-12-10 00:07:27.223903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:11.533  [2024-12-10 00:07:27.223916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:11.533  [2024-12-10 00:07:27.223924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:11.534  [2024-12-10 00:07:27.223931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:11.534  [2024-12-10 00:07:27.224331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea9c80 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.224432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x951610 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.224514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30920 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.224597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe60e90 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.224669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c490 (9): Bad file descriptor
00:26:11.534  [2024-12-10 00:07:27.224693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.534  [2024-12-10 00:07:27.224744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.534  [2024-12-10 00:07:27.224750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa302d0 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.224763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa304d0 (9): Bad file descriptor
00:26:11.534  [2024-12-10 00:07:27.227681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:11.534  [2024-12-10 00:07:27.227874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.534  [2024-12-10 00:07:27.227891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98370 with addr=10.0.0.2, port=4420
00:26:11.534  [2024-12-10 00:07:27.227900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98370 is same with the state(6) to be set
00:26:11.534  [2024-12-10 00:07:27.227975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98370 (9): Bad file descriptor
00:26:11.534  [2024-12-10 00:07:27.228052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:11.534  [2024-12-10 00:07:27.228064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:11.534  [2024-12-10 00:07:27.228071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:11.534  [2024-12-10 00:07:27.228080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:11.535  [2024-12-10 00:07:27.232948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:11.535  [2024-12-10 00:07:27.232992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233084] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233173] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233189] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.535  [2024-12-10 00:07:27.233210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3c000 with addr=10.0.0.2, port=4420
00:26:11.535  [2024-12-10 00:07:27.233230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c000 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c000 (9): Bad file descriptor
00:26:11.535  [2024-12-10 00:07:27.233275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:11.535  [2024-12-10 00:07:27.233301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:11.535  [2024-12-10 00:07:27.233310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1aa30 is same with the state(6) to be set
00:26:11.535  [2024-12-10 00:07:27.233316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:11.535  [2024-12-10 00:07:27.233326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:11.535  [2024-12-10 00:07:27.233483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.535  [2024-12-10 00:07:27.233733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.535  [2024-12-10 00:07:27.233740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.233989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.233997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.536  [2024-12-10 00:07:27.234456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.536  [2024-12-10 00:07:27.234463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.234478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.234493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8b140 is same with the state(6) to be set
00:26:11.537  [2024-12-10 00:07:27.234580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.537  [2024-12-10 00:07:27.234589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.537  [2024-12-10 00:07:27.234605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.537  [2024-12-10 00:07:27.234620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:11.537  [2024-12-10 00:07:27.234634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.234640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98150 is same with the state(6) to be set
00:26:11.537  [2024-12-10 00:07:27.234656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea9c80 (9): Bad file descriptor
00:26:11.537  [2024-12-10 00:07:27.234667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x951610 (9): Bad file descriptor
00:26:11.537  [2024-12-10 00:07:27.234683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa30920 (9): Bad file descriptor
00:26:11.537  [2024-12-10 00:07:27.234698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe60e90 (9): Bad file descriptor
00:26:11.537  [2024-12-10 00:07:27.234718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa302d0 (9): Bad file descriptor
00:26:11.537  [2024-12-10 00:07:27.235672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:11.537  [2024-12-10 00:07:27.235736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.235986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.235996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.537  [2024-12-10 00:07:27.236378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.537  [2024-12-10 00:07:27.236386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.236733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.236740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.242319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.242329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.242336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.242344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc403d0 is same with the state(6) to be set
00:26:11.538  [2024-12-10 00:07:27.243326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.538  [2024-12-10 00:07:27.243679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.538  [2024-12-10 00:07:27.243687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.243984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.243992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.244367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.244375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc414c0 is same with the state(6) to be set
00:26:11.539  [2024-12-10 00:07:27.245355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:11.539  [2024-12-10 00:07:27.245375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:11.539  [2024-12-10 00:07:27.245545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.539  [2024-12-10 00:07:27.245558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea9c80 with addr=10.0.0.2, port=4420
00:26:11.539  [2024-12-10 00:07:27.245567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea9c80 is same with the state(6) to be set
00:26:11.539  [2024-12-10 00:07:27.245598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98150 (9): Bad file descriptor
00:26:11.539  [2024-12-10 00:07:27.246129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.539  [2024-12-10 00:07:27.246148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3c490 with addr=10.0.0.2, port=4420
00:26:11.539  [2024-12-10 00:07:27.246161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c490 is same with the state(6) to be set
00:26:11.539  [2024-12-10 00:07:27.246256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.539  [2024-12-10 00:07:27.246268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa304d0 with addr=10.0.0.2, port=4420
00:26:11.539  [2024-12-10 00:07:27.246276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa304d0 is same with the state(6) to be set
00:26:11.539  [2024-12-10 00:07:27.246285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea9c80 (9): Bad file descriptor
00:26:11.539  [2024-12-10 00:07:27.246296] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:26:11.539  [2024-12-10 00:07:27.246771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.246786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.246800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.246808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.246819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.246828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.246837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.539  [2024-12-10 00:07:27.246845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.539  [2024-12-10 00:07:27.246853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.246985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.246993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.247841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.247849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe409c0 is same with the state(6) to be set
00:26:11.540  [2024-12-10 00:07:27.248831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.248848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.248859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.540  [2024-12-10 00:07:27.248867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.540  [2024-12-10 00:07:27.248877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.248987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.248996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.541  [2024-12-10 00:07:27.249912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.541  [2024-12-10 00:07:27.249920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe41cf0 is same with the state(6) to be set
00:26:11.542  [2024-12-10 00:07:27.250925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.250943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.250955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.250963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.250973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.250981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.250991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.250999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.542  [2024-12-10 00:07:27.251920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.542  [2024-12-10 00:07:27.251929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.251936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.251946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.251953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.251962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.251970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.251979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.251986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.251994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.252002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.252010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.252018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.252027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.252036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.252043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe43020 is same with the state(6) to be set
00:26:11.543  [2024-12-10 00:07:27.253052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.543  [2024-12-10 00:07:27.253990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.543  [2024-12-10 00:07:27.253999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:11.544  [2024-12-10 00:07:27.254152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.544  [2024-12-10 00:07:27.254161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b3d9f0 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.255147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.255164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.255183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.255196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.255207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.255249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c490 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.255265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa304d0 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.255273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.255281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.255290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.255299] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:26:11.544  task offset: 24960 on job bdev=Nvme10n1 fails
00:26:11.544  
00:26:11.544                                                                                                  Latency(us)
00:26:11.544  
00:26:11.544  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:11.544  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme1n1 ended in about 0.92 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme1n1             :       0.92     208.02      13.00      69.34     0.00  228396.74   14854.83  216705.71
00:26:11.544  Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme2n1 ended in about 0.93 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme2n1             :       0.93     207.57      12.97      69.19     0.00  224692.66   17601.10  217704.35
00:26:11.544  Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme3n1 ended in about 0.90 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme3n1             :       0.90     284.03      17.75       4.44     0.00  211190.03    1763.23  217704.35
00:26:11.544  Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme4n1 ended in about 0.93 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme4n1             :       0.93     206.79      12.92      68.93     0.00  217841.62   22843.98  207717.91
00:26:11.544  Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme5n1 ended in about 0.93 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme5n1             :       0.93     206.33      12.90      68.78     0.00  214487.53   27587.54  206719.27
00:26:11.544  Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme6n1 ended in about 0.93 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme6n1             :       0.93     205.86      12.87      68.62     0.00  211188.54   16103.13  220700.28
00:26:11.544  Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme7n1 ended in about 0.93 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme7n1             :       0.93     209.68      13.10      68.47     0.00  204639.52   12982.37  219701.64
00:26:11.544  Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme8n1 ended in about 0.92 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme8n1             :       0.92     213.02      13.31      69.91     0.00  196864.29   12483.05  218702.99
00:26:11.544  Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme9n1             :       0.91     280.61      17.54       0.00     0.00  194470.03    4774.77  244667.73
00:26:11.544  Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:26:11.544  Job: Nvme10n1 ended in about 0.90 seconds with error
00:26:11.544  	 Verification LBA range: start 0x0 length 0x400
00:26:11.544  	 Nvme10n1            :       0.90     214.14      13.38      71.38     0.00  186752.58    3105.16  225693.50
00:26:11.544  
00:26:11.544  ===================================================================================================================
00:26:11.544  
00:26:11.544  Total                       :               2236.04     139.75     559.06     0.00  209034.59    1763.23  244667.73
00:26:11.544  [2024-12-10 00:07:27.286306] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:26:11.544  [2024-12-10 00:07:27.286359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.286667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.286687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98370 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.286698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98370 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.286852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.286863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3c000 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.286870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c000 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.287086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.287102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa302d0 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.287110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa302d0 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.287326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.287338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe60e90 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.287347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe60e90 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.287560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.287571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa30920 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.287578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa30920 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.287587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.287593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.287601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.287611] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.287620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.287626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.287633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.287640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.288765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.544  [2024-12-10 00:07:27.288787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x951610 with addr=10.0.0.2, port=4420
00:26:11.544  [2024-12-10 00:07:27.288797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x951610 is same with the state(6) to be set
00:26:11.544  [2024-12-10 00:07:27.288811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98370 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.288822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c000 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.288832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa302d0 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.288845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe60e90 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.288855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa30920 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.288893] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288906] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288916] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288926] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288936] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288947] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:26:11.544  [2024-12-10 00:07:27.288992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.289033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x951610 (9): Bad file descriptor
00:26:11.544  [2024-12-10 00:07:27.289043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.289049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.289057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.289064] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.289071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.289077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.289084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.289090] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.289097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.289104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.289111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.289117] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.289124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.289130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.289136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.289142] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.289150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:26:11.544  [2024-12-10 00:07:27.289159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:26:11.544  [2024-12-10 00:07:27.289196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:26:11.544  [2024-12-10 00:07:27.289203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:26:11.544  [2024-12-10 00:07:27.289262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.289273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.289281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:26:11.544  [2024-12-10 00:07:27.289430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.545  [2024-12-10 00:07:27.289443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe98150 with addr=10.0.0.2, port=4420
00:26:11.545  [2024-12-10 00:07:27.289451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe98150 is same with the state(6) to be set
00:26:11.545  [2024-12-10 00:07:27.289459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:26:11.545  [2024-12-10 00:07:27.289465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:26:11.545  [2024-12-10 00:07:27.289472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:26:11.545  [2024-12-10 00:07:27.289479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:26:11.545  [2024-12-10 00:07:27.289650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.545  [2024-12-10 00:07:27.289662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa304d0 with addr=10.0.0.2, port=4420
00:26:11.545  [2024-12-10 00:07:27.289670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa304d0 is same with the state(6) to be set
00:26:11.545  [2024-12-10 00:07:27.289884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.545  [2024-12-10 00:07:27.289895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa3c490 with addr=10.0.0.2, port=4420
00:26:11.545  [2024-12-10 00:07:27.289903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa3c490 is same with the state(6) to be set
00:26:11.545  [2024-12-10 00:07:27.290078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.545  [2024-12-10 00:07:27.290090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xea9c80 with addr=10.0.0.2, port=4420
00:26:11.545  [2024-12-10 00:07:27.290098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xea9c80 is same with the state(6) to be set
00:26:11.545  [2024-12-10 00:07:27.290108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe98150 (9): Bad file descriptor
00:26:11.545  [2024-12-10 00:07:27.290137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa304d0 (9): Bad file descriptor
00:26:11.545  [2024-12-10 00:07:27.290148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3c490 (9): Bad file descriptor
00:26:11.545  [2024-12-10 00:07:27.290157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xea9c80 (9): Bad file descriptor
00:26:11.545  [2024-12-10 00:07:27.290169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:26:11.545  [2024-12-10 00:07:27.290177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:26:11.545  [2024-12-10 00:07:27.290183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:26:11.545  [2024-12-10 00:07:27.290189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:26:11.545  [2024-12-10 00:07:27.290215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:26:11.545  [2024-12-10 00:07:27.290224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:26:11.545  [2024-12-10 00:07:27.290230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:26:11.545  [2024-12-10 00:07:27.290236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:26:11.545  [2024-12-10 00:07:27.290244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:26:11.545  [2024-12-10 00:07:27.290250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:26:11.545  [2024-12-10 00:07:27.290257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:11.545  [2024-12-10 00:07:27.290263] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:26:11.545  [2024-12-10 00:07:27.290270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:26:11.545  [2024-12-10 00:07:27.290276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:26:11.545  [2024-12-10 00:07:27.290282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:26:11.545  [2024-12-10 00:07:27.290288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:26:11.804   00:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3150359
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3150359
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:13.180    00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3150359
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:13.180  rmmod nvme_tcp
00:26:13.180  rmmod nvme_fabrics
00:26:13.180  rmmod nvme_keyring
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3150135 ']'
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3150135
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3150135 ']'
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3150135
00:26:13.180  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3150135) - No such process
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3150135 is not found'
00:26:13.180  Process with pid 3150135 is not found
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:13.180   00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:13.180    00:07:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:15.091  
00:26:15.091  real	0m7.074s
00:26:15.091  user	0m16.110s
00:26:15.091  sys	0m1.255s
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:26:15.091  ************************************
00:26:15.091  END TEST nvmf_shutdown_tc3
00:26:15.091  ************************************
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:15.091  ************************************
00:26:15.091  START TEST nvmf_shutdown_tc4
00:26:15.091  ************************************
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:15.091    00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=()
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:15.091  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:15.091   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:15.092  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:15.092  Found net devices under 0000:af:00.0: cvl_0_0
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:15.092  Found net devices under 0000:af:00.1: cvl_0_1
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:15.092   00:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:15.355  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:15.355  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:26:15.355  
00:26:15.355  --- 10.0.0.2 ping statistics ---
00:26:15.355  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:15.355  rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:15.355  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:15.355  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:26:15.355  
00:26:15.355  --- 10.0.0.1 ping statistics ---
00:26:15.355  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:15.355  rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3151507
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3151507
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3151507 ']'
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:15.355  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:15.355   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.618  [2024-12-10 00:07:31.237992] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:15.618  [2024-12-10 00:07:31.238040] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:15.618  [2024-12-10 00:07:31.314418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:15.618  [2024-12-10 00:07:31.355017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:15.618  [2024-12-10 00:07:31.355054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:15.618  [2024-12-10 00:07:31.355061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:15.618  [2024-12-10 00:07:31.355067] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:15.618  [2024-12-10 00:07:31.355072] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:15.618  [2024-12-10 00:07:31.356578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:15.618  [2024-12-10 00:07:31.356689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:15.618  [2024-12-10 00:07:31.356773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:15.618  [2024-12-10 00:07:31.356774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:26:15.618   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:15.618   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0
00:26:15.618   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:15.618   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:15.618   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.877   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.878  [2024-12-10 00:07:31.494178] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10})
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:15.878   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:15.878  Malloc1
00:26:15.878  [2024-12-10 00:07:31.602210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:15.878  Malloc2
00:26:15.878  Malloc3
00:26:15.878  Malloc4
00:26:16.136  Malloc5
00:26:16.136  Malloc6
00:26:16.136  Malloc7
00:26:16.136  Malloc8
00:26:16.136  Malloc9
00:26:16.136  Malloc10
00:26:16.136   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:16.136   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:26:16.136   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:16.136   00:07:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:16.395   00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3151650
00:26:16.395   00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:26:16.395   00:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:26:16.395  [2024-12-10 00:07:32.101365] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3151507
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3151507 ']'
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3151507
00:26:21.673    00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:21.673    00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3151507
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3151507'
00:26:21.673  killing process with pid 3151507
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3151507
00:26:21.673   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3151507
00:26:21.673  [2024-12-10 00:07:37.102070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7d090 is same with the state(6) to be set
00:26:21.673  [2024-12-10 00:07:37.102130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7d090 is same with the state(6) to be set
00:26:21.673  [2024-12-10 00:07:37.102592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.673  [2024-12-10 00:07:37.102628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.102636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.102642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.102649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.102662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805040 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.103408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.674  NVMe io qpair process completion error
00:26:21.674  [2024-12-10 00:07:37.104391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.104416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.104423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.104429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.104435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.104441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8f80 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.105093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f9450 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  [2024-12-10 00:07:37.106379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.106398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.106413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19f8ab0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  starting I/O failed: -6
00:26:21.674  [2024-12-10 00:07:37.106968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.106975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  [2024-12-10 00:07:37.106982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  [2024-12-10 00:07:37.106988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18058b0 is same with the state(6) to be set
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  starting I/O failed: -6
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.674  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.107313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.107336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  [2024-12-10 00:07:37.107344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.107351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  [2024-12-10 00:07:37.107357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.107364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.107370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  [2024-12-10 00:07:37.107377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1805d80 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.107520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.107765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.107785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.107793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  [2024-12-10 00:07:37.107801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.107808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  [2024-12-10 00:07:37.107815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1806270 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.108120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.108140] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.108150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.108159] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.108179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  starting I/O failed: -6
00:26:21.675  [2024-12-10 00:07:37.108186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18053e0 is same with the state(6) to be set
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  [2024-12-10 00:07:37.108526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.675  starting I/O failed: -6
00:26:21.675  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  [2024-12-10 00:07:37.110123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.676  NVMe io qpair process completion error
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  [2024-12-10 00:07:37.111061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  Write completed with error (sct=0, sc=8)
00:26:21.676  starting I/O failed: -6
00:26:21.677  [2024-12-10 00:07:37.111860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.677  [2024-12-10 00:07:37.112880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.677  Write completed with error (sct=0, sc=8)
00:26:21.677  starting I/O failed: -6
00:26:21.678  [2024-12-10 00:07:37.114641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.678  NVMe io qpair process completion error
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  [2024-12-10 00:07:37.115643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  [2024-12-10 00:07:37.116503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.678  starting I/O failed: -6
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  Write completed with error (sct=0, sc=8)
00:26:21.678  starting I/O failed: -6
00:26:21.679  [2024-12-10 00:07:37.117515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.679  Write completed with error (sct=0, sc=8)
00:26:21.679  starting I/O failed: -6
00:26:21.680  [2024-12-10 00:07:37.119091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.680  NVMe io qpair process completion error
00:26:21.680  Write completed with error (sct=0, sc=8)
00:26:21.680  Write completed with error (sct=0, sc=8)
00:26:21.680  starting I/O failed: -6
00:26:21.680  [... 28 more "Write completed with error (sct=0, sc=8)" and 7 more "starting I/O failed: -6" lines condensed ...]
00:26:21.680  [2024-12-10 00:07:37.120208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.680  Write completed with error (sct=0, sc=8)
00:26:21.680  Write completed with error (sct=0, sc=8)
00:26:21.680  starting I/O failed: -6
00:26:21.680  [... 44 more "Write completed with error (sct=0, sc=8)" and 22 more "starting I/O failed: -6" lines condensed ...]
00:26:21.680  [2024-12-10 00:07:37.121105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.680  Write completed with error (sct=0, sc=8)
00:26:21.680  starting I/O failed: -6
00:26:21.680  [... 48 more "Write completed with error (sct=0, sc=8)" and 36 more "starting I/O failed: -6" lines condensed ...]
00:26:21.681  [2024-12-10 00:07:37.122135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.681  Write completed with error (sct=0, sc=8)
00:26:21.681  starting I/O failed: -6
00:26:21.681  [... 59 more "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pairs condensed ...]
00:26:21.681  [2024-12-10 00:07:37.124228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.681  NVMe io qpair process completion error
00:26:21.681  Write completed with error (sct=0, sc=8)
00:26:21.681  starting I/O failed: -6
00:26:21.681  [... 36 more "Write completed with error (sct=0, sc=8)" and 8 more "starting I/O failed: -6" lines condensed ...]
00:26:21.681  [2024-12-10 00:07:37.125192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.681  Write completed with error (sct=0, sc=8)
00:26:21.681  starting I/O failed: -6
00:26:21.681  [... 44 more "Write completed with error (sct=0, sc=8)" and 22 more "starting I/O failed: -6" lines condensed ...]
00:26:21.682  [2024-12-10 00:07:37.126081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.682  Write completed with error (sct=0, sc=8)
00:26:21.682  starting I/O failed: -6
00:26:21.682  [... 47 more "Write completed with error (sct=0, sc=8)" and 35 more "starting I/O failed: -6" lines condensed ...]
00:26:21.682  [2024-12-10 00:07:37.127094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.682  starting I/O failed: -6
00:26:21.682  Write completed with error (sct=0, sc=8)
00:26:21.682  [... 58 more "Write completed with error (sct=0, sc=8)" and 59 more "starting I/O failed: -6" lines condensed ...]
00:26:21.683  [2024-12-10 00:07:37.130872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.683  NVMe io qpair process completion error
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  [2024-12-10 00:07:37.131896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  [2024-12-10 00:07:37.132781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.683  starting I/O failed: -6
00:26:21.683  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  [2024-12-10 00:07:37.133774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.684  Write completed with error (sct=0, sc=8)
00:26:21.684  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  [2024-12-10 00:07:37.137792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.685  NVMe io qpair process completion error
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  [2024-12-10 00:07:37.138856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  [2024-12-10 00:07:37.139726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  starting I/O failed: -6
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.685  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  [2024-12-10 00:07:37.140793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  Write completed with error (sct=0, sc=8)
00:26:21.686  starting I/O failed: -6
00:26:21.686  [2024-12-10 00:07:37.142570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.686  NVMe io qpair process completion error
00:26:21.687  [2024-12-10 00:07:37.143598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.687  [2024-12-10 00:07:37.144466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.687  [2024-12-10 00:07:37.145440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.688  [2024-12-10 00:07:37.147222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.688  NVMe io qpair process completion error
00:26:21.689  [2024-12-10 00:07:37.151620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:21.689  [2024-12-10 00:07:37.152532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:26:21.689  Write completed with error (sct=0, sc=8)
00:26:21.689  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  [2024-12-10 00:07:37.153531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  Write completed with error (sct=0, sc=8)
00:26:21.690  starting I/O failed: -6
00:26:21.690  [2024-12-10 00:07:37.157656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:21.690  NVMe io qpair process completion error
00:26:21.690  Initializing NVMe Controllers
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.690  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:26:21.690  Controller IO queue size 128, less than required.
00:26:21.690  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.691  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:26:21.691  Controller IO queue size 128, less than required.
00:26:21.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.691  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:26:21.691  Controller IO queue size 128, less than required.
00:26:21.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.691  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:26:21.691  Controller IO queue size 128, less than required.
00:26:21.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.691  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:26:21.691  Controller IO queue size 128, less than required.
00:26:21.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:26:21.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:26:21.691  Initialization complete. Launching workers.
00:26:21.691  ========================================================
00:26:21.691                                                                                                                Latency(us)
00:26:21.691  Device Information                                                        :       IOPS      MiB/s    Average        min        max
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1  from core  0:    2164.45      93.00   59090.50     480.83  114490.18
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1  from core  0:    2181.22      93.72   58686.51     703.22  134314.00
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core  0:    2182.28      93.77   58052.98     629.17  108968.49
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1  from core  0:    2175.28      93.47   58248.54     746.37  105620.86
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1  from core  0:    2185.46      93.91   57988.56     709.79  104406.26
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1  from core  0:    2185.25      93.90   58006.42     501.28  104168.47
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1  from core  0:    2208.38      94.89   57415.38     678.55  102487.05
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1  from core  0:    2208.38      94.89   57452.68     734.78  105733.55
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1  from core  0:    2201.80      94.61   57664.64     665.66  109858.05
00:26:21.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1  from core  0:    2199.04      94.49   57749.59     896.21  112151.30
00:26:21.691  ========================================================
00:26:21.691  Total                                                                     :   21891.55     940.65   58032.63     480.83  134314.00
00:26:21.691  
00:26:21.691  [2024-12-10 00:07:37.161360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1305ae0 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303890 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1305900 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1305720 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304410 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304740 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303bc0 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303ef0 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1303560 is same with the state(6) to be set
00:26:21.691  [2024-12-10 00:07:37.161628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1304a70 is same with the state(6) to be set
00:26:21.691  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:26:21.691   00:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3151650
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3151650
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:22.627    00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3151650
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:26:22.627   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:22.886  rmmod nvme_tcp
00:26:22.886  rmmod nvme_fabrics
00:26:22.886  rmmod nvme_keyring
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3151507 ']'
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3151507
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3151507 ']'
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3151507
00:26:22.886  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3151507) - No such process
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3151507 is not found'
00:26:22.886  Process with pid 3151507 is not found
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:22.886   00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:22.886    00:07:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:24.789   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:24.789  
00:26:24.789  real	0m9.780s
00:26:24.789  user	0m24.921s
00:26:24.789  sys	0m5.161s
00:26:24.789   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:24.789   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:26:24.789  ************************************
00:26:24.789  END TEST nvmf_shutdown_tc4
00:26:24.789  ************************************
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:26:25.047  
00:26:25.047  real	0m40.525s
00:26:25.047  user	1m39.380s
00:26:25.047  sys	0m13.970s
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:25.047  ************************************
00:26:25.047  END TEST nvmf_shutdown
00:26:25.047  ************************************
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:25.047  ************************************
00:26:25.047  START TEST nvmf_nsid
00:26:25.047  ************************************
00:26:25.047   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:26:25.047  * Looking for test storage...
00:26:25.047  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:26:25.047    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:25.047     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:26:25.047     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:26:25.306    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:25.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.307  		--rc genhtml_branch_coverage=1
00:26:25.307  		--rc genhtml_function_coverage=1
00:26:25.307  		--rc genhtml_legend=1
00:26:25.307  		--rc geninfo_all_blocks=1
00:26:25.307  		--rc geninfo_unexecuted_blocks=1
00:26:25.307  		
00:26:25.307  		'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:25.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.307  		--rc genhtml_branch_coverage=1
00:26:25.307  		--rc genhtml_function_coverage=1
00:26:25.307  		--rc genhtml_legend=1
00:26:25.307  		--rc geninfo_all_blocks=1
00:26:25.307  		--rc geninfo_unexecuted_blocks=1
00:26:25.307  		
00:26:25.307  		'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:25.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.307  		--rc genhtml_branch_coverage=1
00:26:25.307  		--rc genhtml_function_coverage=1
00:26:25.307  		--rc genhtml_legend=1
00:26:25.307  		--rc geninfo_all_blocks=1
00:26:25.307  		--rc geninfo_unexecuted_blocks=1
00:26:25.307  		
00:26:25.307  		'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:25.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:25.307  		--rc genhtml_branch_coverage=1
00:26:25.307  		--rc genhtml_function_coverage=1
00:26:25.307  		--rc genhtml_legend=1
00:26:25.307  		--rc geninfo_all_blocks=1
00:26:25.307  		--rc geninfo_unexecuted_blocks=1
00:26:25.307  		
00:26:25.307  		'
00:26:25.307   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:25.307     00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:25.307      00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.307      00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.307      00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.307      00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH
00:26:25.307      00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:25.307    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:25.308  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:25.308    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:25.308    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:25.308    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0
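The `[: : integer expression expected` message above is bash rejecting an empty string fed to `-eq` at common.sh line 33 (`'[' '' -eq 1 ']'`); the test returns non-zero and the script simply falls through. A minimal reproduction plus the usual guard is sketched below — the variable name is illustrative, not necessarily the one common.sh actually expands there:

```shell
# Reproduce the failure mode seen in the log, then guard against it.
# NOTE: "interrupt_mode" is a hypothetical stand-in for whatever empty
# variable common.sh line 33 expands.
interrupt_mode=""                      # empty, as in the failing run

# Failing form (commented out -- it prints
# "[: : integer expression expected" and returns status 2):
#   [ "$interrupt_mode" -eq 1 ]

# Guarded form: ${var:-0} substitutes 0 for an unset OR empty value,
# so -eq always sees an integer.
if [ "${interrupt_mode:-0}" -eq 1 ]; then
  mode=interrupt
else
  mode=poll
fi
echo "$mode"
```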
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid=
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:25.308    00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable
00:26:25.308   00:07:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=()
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=()
00:26:31.875   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=()
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=()
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=()
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:31.876  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:31.876  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:31.876  Found net devices under 0000:af:00.0: cvl_0_0
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:31.876  Found net devices under 0000:af:00.1: cvl_0_1
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:31.876   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:31.877  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:31.877  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms
00:26:31.877  
00:26:31.877  --- 10.0.0.2 ping statistics ---
00:26:31.877  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:31.877  rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:31.877  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:31.877  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms
00:26:31.877  
00:26:31.877  --- 10.0.0.1 ping statistics ---
00:26:31.877  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:31.877  rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms
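The `nvmf_tcp_init` steps above (common.sh@265–291) split one dual-port NIC into a target side inside a network namespace and an initiator side in the root namespace, then verify connectivity both ways. Condensed as a root-only configuration sketch (interface and address values mirror the log; this reconfigures real NICs, so it is not something to run outside a disposable test box):

```shell
# Namespace plumbing performed by nvmf_tcp_init, condensed.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port on the initiator-side interface (the harness
# tags its rules with an SPDK_NVMF comment so cleanup can find them):
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Connectivity check in both directions, as in the ping output above:
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```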
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=3156211
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 3156211
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3156211 ']'
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:31.877  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:31.877   00:07:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.877  [2024-12-10 00:07:46.920024] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:31.877  [2024-12-10 00:07:46.920075] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:31.877  [2024-12-10 00:07:46.988579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:31.877  [2024-12-10 00:07:47.028974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:31.877  [2024-12-10 00:07:47.029006] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:31.877  [2024-12-10 00:07:47.029013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:31.877  [2024-12-10 00:07:47.029019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:31.877  [2024-12-10 00:07:47.029024] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:31.877  [2024-12-10 00:07:47.029516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3156253
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=bde31186-9ae4-4d0d-90a9-b9eb2cc17cee
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=1108958c-8d70-4adc-ba6f-135c0ab5023b
00:26:31.877    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=37b2bb4f-b1fc-4288-8973-fb2713644213
00:26:31.877   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.878  null0
00:26:31.878  null1
00:26:31.878  [2024-12-10 00:07:47.213841] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:31.878  [2024-12-10 00:07:47.213884] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3156253 ]
00:26:31.878  null2
00:26:31.878  [2024-12-10 00:07:47.221346] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:31.878  [2024-12-10 00:07:47.245536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3156253 /var/tmp/tgt2.sock
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3156253 ']'
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...'
00:26:31.878  Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:31.878  [2024-12-10 00:07:47.287346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:31.878  [2024-12-10 00:07:47.330718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0
00:26:31.878   00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock
00:26:32.136  [2024-12-10 00:07:47.838275] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:32.136  [2024-12-10 00:07:47.854364] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 ***
00:26:32.136  nvme0n1 nvme0n2
00:26:32.136  nvme1n1
00:26:32.136    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect
00:26:32.136    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr
00:26:32.136    00:07:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:33.519    00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme*
00:26:33.520    00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]]
00:26:33.520    00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]]
00:26:33.520    00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0
00:26:33.520    00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0
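The `nvme_connect` lookup above (nsid.sh@28–32) scans `/sys/class/nvme/nvme*` for the controller whose `subsysnqn` matches the subsystem we just connected to. A sketch of that loop, with the sysfs root parameterized so the logic can be exercised against a fake tree:

```shell
# Find the nvme controller entry whose subsysnqn matches, as nsid.sh does.
# SYS_NVME is parameterized here purely for testability; the real script
# hardcodes /sys/class/nvme.
SYS_NVME=${SYS_NVME:-/sys/class/nvme}

find_ctrlr() {
  local subnqn=$1 ctrlr
  for ctrlr in "$SYS_NVME"/nvme*; do
    [ -e "$ctrlr/subsysnqn" ] || continue      # skip non-controller entries
    if [ "$(cat "$ctrlr/subsysnqn")" = "$subnqn" ]; then
      echo "${ctrlr##*/}"                      # e.g. nvme0
      return 0
    fi
  done
  return 1
}
```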
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']'
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1
00:26:33.520   00:07:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1
00:26:34.456   00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:26:34.456   00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:26:34.456   00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:26:34.456   00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:26:34.456   00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
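The trace above shows `waitforblk` polling `lsblk -l -o NAME | grep -q -w nvme0n1` once per second until the namespace block device appears, giving up after 15 attempts. A minimal generic sketch of that retry pattern (the name `wait_for` is hypothetical; the real helper lives in common/autotest_common.sh and hard-codes the lsblk check):

```shell
# Retry a check command once per second until it succeeds, up to a
# retry limit -- the same loop shape as waitforblk in the trace,
# where the check is: lsblk -l -o NAME | grep -q -w nvme0n1
wait_for() {
    local limit=$1; shift
    local i=0
    until "$@"; do
        if [ "$i" -ge "$limit" ]; then
            return 1        # device never appeared; give up
        fi
        i=$((i + 1))
        sleep 1
    done
    return 0
}
```

In the log, the first `lsblk` probe misses (the connect is still settling), so the loop sleeps once and the second probe at 00:07:49 finds `nvme0n1`.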
00:26:34.456    00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid bde31186-9ae4-4d0d-90a9-b9eb2cc17cee
00:26:34.456    00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:26:34.456    00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1
00:26:34.456    00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid
00:26:34.456     00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json
00:26:34.456     00:07:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bde311869ae44d0d90a9b9eb2cc17cee
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BDE311869AE44D0D90A9B9EB2CC17CEE
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ BDE311869AE44D0D90A9B9EB2CC17CEE == \B\D\E\3\1\1\8\6\9\A\E\4\4\D\0\D\9\0\A\9\B\9\E\B\2\C\C\1\7\C\E\E ]]
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 1108958c-8d70-4adc-ba6f-135c0ab5023b
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid
00:26:34.456     00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json
00:26:34.456     00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1108958c8d704adcba6f135c0ab5023b
00:26:34.456    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1108958C8D704ADCBA6F135C0AB5023B
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 1108958C8D704ADCBA6F135C0AB5023B == \1\1\0\8\9\5\8\C\8\D\7\0\4\A\D\C\B\A\6\F\1\3\5\C\0\A\B\5\0\2\3\B ]]
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3
00:26:34.456   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:26:34.457   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3
00:26:34.457   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 37b2bb4f-b1fc-4288-8973-fb2713644213
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d -
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid
00:26:34.457     00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json
00:26:34.457     00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=37b2bb4fb1fc42888973fb2713644213
00:26:34.457    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 37B2BB4FB1FC42888973FB2713644213
00:26:34.457   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 37B2BB4FB1FC42888973FB2713644213 == \3\7\B\2\B\B\4\F\B\1\F\C\4\2\8\8\8\9\7\3\F\B\2\7\1\3\6\4\4\2\1\3 ]]
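Each namespace check above converts the configured UUID to NGUID form (`uuid2nguid` in nvmf/common.sh strips the dashes, as the `tr -d -` line shows) and compares it against the uppercased `nguid` field that `nvme id-ns ... -o json | jq -r .nguid` reports for the device. A sketch of that conversion, reconstructed from the trace rather than copied from common.sh:

```shell
# Convert a UUID to NGUID form: drop the dashes (the "tr -d -" seen
# in the trace) and uppercase the hex digits, so it can be compared
# against the uppercased nguid printed by nvme id-ns.
uuid2nguid() {
    local uuid=$1
    uuid=${uuid//-/}            # strip dashes, like: tr -d -
    printf '%s\n' "${uuid^^}"   # uppercase (bash 4+ expansion)
}
```

With the first namespace's UUID from the log, `uuid2nguid bde31186-9ae4-4d0d-90a9-b9eb2cc17cee` yields `BDE311869AE44D0D90A9B9EB2CC17CEE`, matching the value the test compares at nsid.sh@96.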
00:26:34.457   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3156253
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3156253 ']'
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3156253
00:26:34.715    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:34.715    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156253
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156253'
00:26:34.715  killing process with pid 3156253
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3156253
00:26:34.715   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3156253
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:34.974  rmmod nvme_tcp
00:26:34.974  rmmod nvme_fabrics
00:26:34.974  rmmod nvme_keyring
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 3156211 ']'
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 3156211
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3156211 ']'
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3156211
00:26:34.974    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname
00:26:34.974   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:35.233    00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3156211
00:26:35.233   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:35.233   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:35.233   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3156211'
00:26:35.233  killing process with pid 3156211
00:26:35.233   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3156211
00:26:35.233   00:07:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3156211
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:35.233   00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:35.233    00:07:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:37.775  
00:26:37.775  real	0m12.362s
00:26:37.775  user	0m9.548s
00:26:37.775  sys	0m5.587s
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:37.775  ************************************
00:26:37.775  END TEST nvmf_nsid
00:26:37.775  ************************************
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:26:37.775  
00:26:37.775  real	11m59.661s
00:26:37.775  user	25m43.270s
00:26:37.775  sys	3m42.747s
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:37.775   00:07:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:26:37.775  ************************************
00:26:37.775  END TEST nvmf_target_extra
00:26:37.775  ************************************
00:26:37.775   00:07:53 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:26:37.775   00:07:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:37.775   00:07:53 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:37.775   00:07:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:37.775  ************************************
00:26:37.775  START TEST nvmf_host
00:26:37.775  ************************************
00:26:37.775   00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:26:37.775  * Looking for test storage...
00:26:37.775  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
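The lcov probe above runs `cmp_versions 1.15 '<' 2`: both version strings are split on `.-:` into arrays, then compared component by component until one side wins. A simplified numeric-only sketch of that walk (the name `version_lt` is hypothetical; scripts/common.sh also handles `>`, `<=`, `>=` and non-numeric components):

```shell
# Return 0 if dot-separated version $1 sorts strictly before $2,
# comparing numeric components left to right and treating missing
# components as 0 -- the loop shape of cmp_versions in the trace.
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < n; v++ )); do
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
            return 0    # first differing component decides: v1 < v2
        fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
            return 1
        fi
    done
    return 1            # all components equal
}
```

For the trace's inputs, the very first components already differ (`1 < 2`), which is why the log shows `ver1[v]=1`, `ver2[v]=2`, then `return 0` without examining `15`.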
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:37.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:37.775  		--rc genhtml_branch_coverage=1
00:26:37.775  		--rc genhtml_function_coverage=1
00:26:37.775  		--rc genhtml_legend=1
00:26:37.775  		--rc geninfo_all_blocks=1
00:26:37.775  		--rc geninfo_unexecuted_blocks=1
00:26:37.775  		
00:26:37.775  		'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:37.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:37.775  		--rc genhtml_branch_coverage=1
00:26:37.775  		--rc genhtml_function_coverage=1
00:26:37.775  		--rc genhtml_legend=1
00:26:37.775  		--rc geninfo_all_blocks=1
00:26:37.775  		--rc geninfo_unexecuted_blocks=1
00:26:37.775  		
00:26:37.775  		'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:37.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:37.775  		--rc genhtml_branch_coverage=1
00:26:37.775  		--rc genhtml_function_coverage=1
00:26:37.775  		--rc genhtml_legend=1
00:26:37.775  		--rc geninfo_all_blocks=1
00:26:37.775  		--rc geninfo_unexecuted_blocks=1
00:26:37.775  		
00:26:37.775  		'
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:37.775  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:37.775  		--rc genhtml_branch_coverage=1
00:26:37.775  		--rc genhtml_function_coverage=1
00:26:37.775  		--rc genhtml_legend=1
00:26:37.775  		--rc geninfo_all_blocks=1
00:26:37.775  		--rc geninfo_unexecuted_blocks=1
00:26:37.775  		
00:26:37.775  		'
00:26:37.775   00:07:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:37.775     00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:37.775    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:37.776      00:07:53 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:37.776      00:07:53 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:37.776      00:07:53 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:37.776      00:07:53 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:26:37.776      00:07:53 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:37.776  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.776  ************************************
00:26:37.776  START TEST nvmf_multicontroller
00:26:37.776  ************************************
00:26:37.776   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:26:37.776  * Looking for test storage...
00:26:37.776  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:37.776     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:26:37.776    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:38.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:38.042  		--rc genhtml_branch_coverage=1
00:26:38.042  		--rc genhtml_function_coverage=1
00:26:38.042  		--rc genhtml_legend=1
00:26:38.042  		--rc geninfo_all_blocks=1
00:26:38.042  		--rc geninfo_unexecuted_blocks=1
00:26:38.042  		
00:26:38.042  		'
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:38.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:38.042  		--rc genhtml_branch_coverage=1
00:26:38.042  		--rc genhtml_function_coverage=1
00:26:38.042  		--rc genhtml_legend=1
00:26:38.042  		--rc geninfo_all_blocks=1
00:26:38.042  		--rc geninfo_unexecuted_blocks=1
00:26:38.042  		
00:26:38.042  		'
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:38.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:38.042  		--rc genhtml_branch_coverage=1
00:26:38.042  		--rc genhtml_function_coverage=1
00:26:38.042  		--rc genhtml_legend=1
00:26:38.042  		--rc geninfo_all_blocks=1
00:26:38.042  		--rc geninfo_unexecuted_blocks=1
00:26:38.042  		
00:26:38.042  		'
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:38.042  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:38.042  		--rc genhtml_branch_coverage=1
00:26:38.042  		--rc genhtml_function_coverage=1
00:26:38.042  		--rc genhtml_legend=1
00:26:38.042  		--rc geninfo_all_blocks=1
00:26:38.042  		--rc geninfo_unexecuted_blocks=1
00:26:38.042  		
00:26:38.042  		'
00:26:38.042   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:38.042     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:38.042    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:38.043     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:38.043     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:26:38.043     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:38.043     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:38.043     00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:38.043      00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:38.043      00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:38.043      00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:38.043      00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:26:38.043      00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:38.043  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:38.043    00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable
00:26:38.043   00:07:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=()
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:44.626  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:44.626  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:44.626  Found net devices under 0000:af:00.0: cvl_0_0
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:44.626  Found net devices under 0000:af:00.1: cvl_0_1
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:44.626   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:44.627  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:44.627  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms
00:26:44.627  
00:26:44.627  --- 10.0.0.2 ping statistics ---
00:26:44.627  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:44.627  rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:44.627  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:44.627  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms
00:26:44.627  
00:26:44.627  --- 10.0.0.1 ping statistics ---
00:26:44.627  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:44.627  rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3160462
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3160462
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3160462 ']'
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:44.627  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:44.627   00:07:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.627  [2024-12-10 00:07:59.628836] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:44.627  [2024-12-10 00:07:59.628888] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:44.627  [2024-12-10 00:07:59.708304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:44.627  [2024-12-10 00:07:59.749714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:44.627  [2024-12-10 00:07:59.749754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:44.627  [2024-12-10 00:07:59.749760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:44.627  [2024-12-10 00:07:59.749766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:44.627  [2024-12-10 00:07:59.749771] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:44.627  [2024-12-10 00:07:59.751128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:44.627  [2024-12-10 00:07:59.751238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:44.627  [2024-12-10 00:07:59.751238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:44.627   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:44.627   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
00:26:44.627   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:44.627   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:44.627   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887  [2024-12-10 00:08:00.514210] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887  Malloc0
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887  [2024-12-10 00:08:00.575840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887  [2024-12-10 00:08:00.583770] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887  Malloc1
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3160530
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3160530 /var/tmp/bdevperf.sock
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3160530 ']'
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:44.887   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:26:44.887  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:26:44.888   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:44.888   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.147   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:45.147   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0
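The `waitforlisten 3160530 /var/tmp/bdevperf.sock` step above polls (with `max_retries=100`) until the bdevperf process is up and its RPC socket exists before any `rpc_cmd -s` is issued. A minimal sketch of that polling pattern, with a plain file standing in for the UNIX socket and a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: poll until the RPC socket path
# appears, giving up after max_retries attempts. A regular file stands
# in for /var/tmp/bdevperf.sock here; the helper name is made up.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -e "$sock" ]; do
        if [ "$i" -ge "$max_retries" ]; then
            return 1    # gave up: process never created its socket
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0            # socket exists, safe to send RPCs
}

touch /tmp/demo.sock
wait_for_sock /tmp/demo.sock && echo "listening"   # prints "listening"
```

The real helper additionally checks that the PID is still alive each iteration, so a crashed bdevperf fails the test quickly instead of burning all retries.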
00:26:45.147   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:26:45.147   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.147   00:08:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.404  NVMe0n1
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.404  1
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:45.404   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.404    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405  request:
00:26:45.405  {
00:26:45.405  "name": "NVMe0",
00:26:45.405  "trtype": "tcp",
00:26:45.405  "traddr": "10.0.0.2",
00:26:45.405  "adrfam": "ipv4",
00:26:45.405  "trsvcid": "4420",
00:26:45.405  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:45.405  "hostnqn": "nqn.2021-09-7.io.spdk:00001",
00:26:45.405  "hostaddr": "10.0.0.1",
00:26:45.405  "prchk_reftag": false,
00:26:45.405  "prchk_guard": false,
00:26:45.405  "hdgst": false,
00:26:45.405  "ddgst": false,
00:26:45.405  "allow_unrecognized_csi": false,
00:26:45.405  "method": "bdev_nvme_attach_controller",
00:26:45.405  "req_id": 1
00:26:45.405  }
00:26:45.405  Got JSON-RPC error response
00:26:45.405  response:
00:26:45.405  {
00:26:45.405  "code": -114,
00:26:45.405  "message": "A controller named NVMe0 already exists with the specified network path"
00:26:45.405  }
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
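The failure above is the point of the test: the `NOT` wrapper asserts that re-attaching a controller under an already-used name fails, and the target answers with JSON-RPC error code -114 ("A controller named NVMe0 already exists with the specified network path"). A minimal sketch of detecting that specific rejection in a caller, with a hypothetical helper name and the error payload modeled on the response logged above:

```shell
#!/usr/bin/env bash
# Hypothetical helper: detect the duplicate-controller rejection (-114)
# in a JSON-RPC error payload shaped like the one logged above.
is_duplicate_controller_error() {
    printf '%s' "$1" | grep -q '"code": -114'
}

resp='{ "code": -114, "message": "A controller named NVMe0 already exists with the specified network path" }'
if is_duplicate_controller_error "$resp"; then
    echo "expected failure: controller name already in use"
fi
```

A production caller would parse the JSON properly (e.g. with `jq '.code'`) rather than grep for the literal field, but the exit-status shape is the same: the harness only needs "failed for the expected reason" versus "failed for some other reason".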
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405  request:
00:26:45.405  {
00:26:45.405  "name": "NVMe0",
00:26:45.405  "trtype": "tcp",
00:26:45.405  "traddr": "10.0.0.2",
00:26:45.405  "adrfam": "ipv4",
00:26:45.405  "trsvcid": "4420",
00:26:45.405  "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:45.405  "hostaddr": "10.0.0.1",
00:26:45.405  "prchk_reftag": false,
00:26:45.405  "prchk_guard": false,
00:26:45.405  "hdgst": false,
00:26:45.405  "ddgst": false,
00:26:45.405  "allow_unrecognized_csi": false,
00:26:45.405  "method": "bdev_nvme_attach_controller",
00:26:45.405  "req_id": 1
00:26:45.405  }
00:26:45.405  Got JSON-RPC error response
00:26:45.405  response:
00:26:45.405  {
00:26:45.405  "code": -114,
00:26:45.405  "message": "A controller named NVMe0 already exists with the specified network path"
00:26:45.405  }
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405  request:
00:26:45.405  {
00:26:45.405  "name": "NVMe0",
00:26:45.405  "trtype": "tcp",
00:26:45.405  "traddr": "10.0.0.2",
00:26:45.405  "adrfam": "ipv4",
00:26:45.405  "trsvcid": "4420",
00:26:45.405  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:45.405  "hostaddr": "10.0.0.1",
00:26:45.405  "prchk_reftag": false,
00:26:45.405  "prchk_guard": false,
00:26:45.405  "hdgst": false,
00:26:45.405  "ddgst": false,
00:26:45.405  "multipath": "disable",
00:26:45.405  "allow_unrecognized_csi": false,
00:26:45.405  "method": "bdev_nvme_attach_controller",
00:26:45.405  "req_id": 1
00:26:45.405  }
00:26:45.405  Got JSON-RPC error response
00:26:45.405  response:
00:26:45.405  {
00:26:45.405  "code": -114,
00:26:45.405  "message": "A controller named NVMe0 already exists and multipath is disabled"
00:26:45.405  }
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405  request:
00:26:45.405  {
00:26:45.405  "name": "NVMe0",
00:26:45.405  "trtype": "tcp",
00:26:45.405  "traddr": "10.0.0.2",
00:26:45.405  "adrfam": "ipv4",
00:26:45.405  "trsvcid": "4420",
00:26:45.405  "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:45.405  "hostaddr": "10.0.0.1",
00:26:45.405  "prchk_reftag": false,
00:26:45.405  "prchk_guard": false,
00:26:45.405  "hdgst": false,
00:26:45.405  "ddgst": false,
00:26:45.405  "multipath": "failover",
00:26:45.405  "allow_unrecognized_csi": false,
00:26:45.405  "method": "bdev_nvme_attach_controller",
00:26:45.405  "req_id": 1
00:26:45.405  }
00:26:45.405  Got JSON-RPC error response
00:26:45.405  response:
00:26:45.405  {
00:26:45.405  "code": -114,
00:26:45.405  "message": "A controller named NVMe0 already exists with the specified network path"
00:26:45.405  }
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405  NVMe0n1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.405   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.406   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1
00:26:45.406   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.406   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.663  
00:26:45.663   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.663    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:45.663    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe
00:26:45.663    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:45.663    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:45.663    00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:45.663   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']'
00:26:45.663   00:08:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:47.091  {
00:26:47.091    "results": [
00:26:47.091      {
00:26:47.091        "job": "NVMe0n1",
00:26:47.091        "core_mask": "0x1",
00:26:47.091        "workload": "write",
00:26:47.091        "status": "finished",
00:26:47.091        "queue_depth": 128,
00:26:47.091        "io_size": 4096,
00:26:47.091        "runtime": 1.006655,
00:26:47.091        "iops": 23855.243355469353,
00:26:47.091        "mibps": 93.18454435730216,
00:26:47.091        "io_failed": 0,
00:26:47.091        "io_timeout": 0,
00:26:47.091        "avg_latency_us": 5348.8447726128015,
00:26:47.091        "min_latency_us": 4213.028571428571,
00:26:47.091        "max_latency_us": 12170.971428571429
00:26:47.091      }
00:26:47.091    ],
00:26:47.091    "core_count": 1
00:26:47.091  }
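The `perform_tests` result above is internally consistent: `mibps` is `iops * io_size` scaled to mebibytes, and `iops` times `runtime` recovers the total completed I/O count. A quick sanity check of the exact figures from the JSON block, using awk for the floating-point arithmetic:

```shell
#!/usr/bin/env bash
# Sanity-check the bdevperf result fields logged above.
iops=23855.243355469353   # "iops" from the JSON result
io_size=4096              # "io_size" in bytes
runtime=1.006655          # "runtime" in seconds

awk -v iops="$iops" -v sz="$io_size" -v rt="$runtime" 'BEGIN {
    mibps = iops * sz / (1024 * 1024)   # throughput in MiB/s
    ios   = iops * rt                   # I/Os completed during the run
    printf "%.2f MiB/s, ~%d I/Os\n", mibps, ios + 0.5
}'
# → 93.18 MiB/s, ~24014 I/Os  (matches "mibps": 93.18454435730216)
```

The same 93.18 MiB/s figure reappears in the human-readable latency table later in try.txt, since both are rendered from this one result set.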
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]]
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3160530
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3160530 ']'
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3160530
00:26:47.091    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:47.091    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160530
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160530'
00:26:47.091  killing process with pid 3160530
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3160530
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3160530
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:26:47.091    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
00:26:47.091    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u
00:26:47.091   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat
00:26:47.091  --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:26:47.091  [2024-12-10 00:08:00.682226] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:47.092  [2024-12-10 00:08:00.682272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160530 ]
00:26:47.092  [2024-12-10 00:08:00.754897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:47.092  [2024-12-10 00:08:00.794525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:47.092  [2024-12-10 00:08:01.423419] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 3b09ea98-fa67-43d7-8a9a-f00b9a9711c4 already exists
00:26:47.092  [2024-12-10 00:08:01.423444] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:3b09ea98-fa67-43d7-8a9a-f00b9a9711c4 alias for bdev NVMe1n1
00:26:47.092  [2024-12-10 00:08:01.423452] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:26:47.092  Running I/O for 1 seconds...
00:26:47.092      23854.00 IOPS,    93.18 MiB/s
00:26:47.092                                                                                                  Latency(us)
00:26:47.092  
[2024-12-09T23:08:02.949Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:47.092  Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:26:47.092  	 NVMe0n1             :       1.01   23855.24      93.18       0.00     0.00    5348.84    4213.03   12170.97
00:26:47.092  
[2024-12-09T23:08:02.949Z]  ===================================================================================================================
00:26:47.092  
[2024-12-09T23:08:02.949Z]  Total                       :              23855.24      93.18       0.00     0.00    5348.84    4213.03   12170.97
00:26:47.092  Received shutdown signal, test time was about 1.000000 seconds
00:26:47.092  
00:26:47.092                                                                                                  Latency(us)
00:26:47.092  
[2024-12-09T23:08:02.949Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:47.092  
[2024-12-09T23:08:02.949Z]  ===================================================================================================================
00:26:47.092  
[2024-12-09T23:08:02.949Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:26:47.092  --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:47.092  rmmod nvme_tcp
00:26:47.092  rmmod nvme_fabrics
00:26:47.092  rmmod nvme_keyring
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3160462 ']'
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3160462
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3160462 ']'
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3160462
00:26:47.092    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname
00:26:47.092   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:47.092    00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3160462
00:26:47.386   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:26:47.386   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:26:47.386   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3160462'
00:26:47.386  killing process with pid 3160462
00:26:47.386   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3160462
00:26:47.386   00:08:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3160462
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:47.386   00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:47.386    00:08:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:49.936  
00:26:49.936  real	0m11.761s
00:26:49.936  user	0m14.290s
00:26:49.936  sys	0m5.214s
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:26:49.936  ************************************
00:26:49.936  END TEST nvmf_multicontroller
00:26:49.936  ************************************
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:49.936  ************************************
00:26:49.936  START TEST nvmf_aer
00:26:49.936  ************************************
00:26:49.936   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:26:49.936  * Looking for test storage...
00:26:49.936  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-:
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-:
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<'
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:49.936    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:49.936     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:49.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:49.937  		--rc genhtml_branch_coverage=1
00:26:49.937  		--rc genhtml_function_coverage=1
00:26:49.937  		--rc genhtml_legend=1
00:26:49.937  		--rc geninfo_all_blocks=1
00:26:49.937  		--rc geninfo_unexecuted_blocks=1
00:26:49.937  		
00:26:49.937  		'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:49.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:49.937  		--rc genhtml_branch_coverage=1
00:26:49.937  		--rc genhtml_function_coverage=1
00:26:49.937  		--rc genhtml_legend=1
00:26:49.937  		--rc geninfo_all_blocks=1
00:26:49.937  		--rc geninfo_unexecuted_blocks=1
00:26:49.937  		
00:26:49.937  		'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:49.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:49.937  		--rc genhtml_branch_coverage=1
00:26:49.937  		--rc genhtml_function_coverage=1
00:26:49.937  		--rc genhtml_legend=1
00:26:49.937  		--rc geninfo_all_blocks=1
00:26:49.937  		--rc geninfo_unexecuted_blocks=1
00:26:49.937  		
00:26:49.937  		'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:49.937  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:49.937  		--rc genhtml_branch_coverage=1
00:26:49.937  		--rc genhtml_function_coverage=1
00:26:49.937  		--rc genhtml_legend=1
00:26:49.937  		--rc geninfo_all_blocks=1
00:26:49.937  		--rc geninfo_unexecuted_blocks=1
00:26:49.937  		
00:26:49.937  		'
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:49.937     00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:49.937      00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:49.937      00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:49.937      00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:49.937      00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH
00:26:49.937      00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:49.937  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:49.937    00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable
00:26:49.937   00:08:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=()
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:26:55.216  Found 0000:af:00.0 (0x8086 - 0x159b)
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:26:55.216  Found 0000:af:00.1 (0x8086 - 0x159b)
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:26:55.216  Found net devices under 0000:af:00.0: cvl_0_0
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:26:55.216   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]]
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:26:55.217  Found net devices under 0000:af:00.1: cvl_0_1
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:26:55.217   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:55.476  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:55.476  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms
00:26:55.476  
00:26:55.476  --- 10.0.0.2 ping statistics ---
00:26:55.476  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.476  rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:55.476  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:55.476  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:26:55.476  
00:26:55.476  --- 10.0.0.1 ping statistics ---
00:26:55.476  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:55.476  rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:55.476   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:55.477   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3164454
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3164454
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3164454 ']'
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:55.741  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:55.741   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:55.741  [2024-12-10 00:08:11.392265] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:26:55.741  [2024-12-10 00:08:11.392315] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:55.741  [2024-12-10 00:08:11.468200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:26:55.741  [2024-12-10 00:08:11.508260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:55.741  [2024-12-10 00:08:11.508298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:55.742  [2024-12-10 00:08:11.508305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:55.742  [2024-12-10 00:08:11.508312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:55.742  [2024-12-10 00:08:11.508318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:55.742  [2024-12-10 00:08:11.509666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.742  [2024-12-10 00:08:11.509773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:55.742  [2024-12-10 00:08:11.509883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:55.742  [2024-12-10 00:08:11.509884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001  [2024-12-10 00:08:11.659711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001  Malloc0
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001  [2024-12-10 00:08:11.723007] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.001  [
00:26:56.001    {
00:26:56.001      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:26:56.001      "subtype": "Discovery",
00:26:56.001      "listen_addresses": [],
00:26:56.001      "allow_any_host": true,
00:26:56.001      "hosts": []
00:26:56.001    },
00:26:56.001    {
00:26:56.001      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:26:56.001      "subtype": "NVMe",
00:26:56.001      "listen_addresses": [
00:26:56.001        {
00:26:56.001          "trtype": "TCP",
00:26:56.001          "adrfam": "IPv4",
00:26:56.001          "traddr": "10.0.0.2",
00:26:56.001          "trsvcid": "4420"
00:26:56.001        }
00:26:56.001      ],
00:26:56.001      "allow_any_host": true,
00:26:56.001      "hosts": [],
00:26:56.001      "serial_number": "SPDK00000000000001",
00:26:56.001      "model_number": "SPDK bdev Controller",
00:26:56.001      "max_namespaces": 2,
00:26:56.001      "min_cntlid": 1,
00:26:56.001      "max_cntlid": 65519,
00:26:56.001      "namespaces": [
00:26:56.001        {
00:26:56.001          "nsid": 1,
00:26:56.001          "bdev_name": "Malloc0",
00:26:56.001          "name": "Malloc0",
00:26:56.001          "nguid": "1E1D2251A678482BAD9A3269C24F42CD",
00:26:56.001          "uuid": "1e1d2251-a678-482b-ad9a-3269c24f42cd"
00:26:56.001        }
00:26:56.001      ]
00:26:56.001    }
00:26:56.001  ]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.001   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3164480
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']'
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']'
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2
00:26:56.002   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0
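The xtrace above is common/autotest_common.sh's `waitforfile` helper polling for the touch file that the aer app creates once its AER callbacks are registered (here it succeeds on the third check). A self-contained sketch of the same poll-with-timeout pattern, under the assumption of the trace's cadence of up to 200 iterations of 0.1 s (~20 s); the file name below is a placeholder, not the test's actual touch file:

```shell
# Poll until $1 exists, sleeping 0.1 s between checks; give up after 200 tries.
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        if [ "$i" -ge 200 ]; then
            echo "timed out waiting for $file" >&2
            return 1
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 0
}

# Demo: create the file up front so the first check succeeds immediately.
touch /tmp/demo_touch_file
waitforfile /tmp/demo_touch_file && echo "file present"
rm -f /tmp/demo_touch_file
```

In the real test the file is created asynchronously by the aer process, so the loop is what synchronizes the script with the app's startup.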
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.261  Malloc1
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.261   00:08:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.261  Asynchronous Event Request test
00:26:56.261  Attaching to 10.0.0.2
00:26:56.261  Attached to 10.0.0.2
00:26:56.261  Registering asynchronous event callbacks...
00:26:56.261  Starting namespace attribute notice tests for all controllers...
00:26:56.261  10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:26:56.261  aer_cb - Changed Namespace
00:26:56.261  Cleaning up...
00:26:56.261  [
00:26:56.261    {
00:26:56.261      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:26:56.261      "subtype": "Discovery",
00:26:56.261      "listen_addresses": [],
00:26:56.261      "allow_any_host": true,
00:26:56.261      "hosts": []
00:26:56.261    },
00:26:56.261    {
00:26:56.261      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:26:56.261      "subtype": "NVMe",
00:26:56.261      "listen_addresses": [
00:26:56.261        {
00:26:56.261          "trtype": "TCP",
00:26:56.261          "adrfam": "IPv4",
00:26:56.261          "traddr": "10.0.0.2",
00:26:56.261          "trsvcid": "4420"
00:26:56.261        }
00:26:56.261      ],
00:26:56.261      "allow_any_host": true,
00:26:56.261      "hosts": [],
00:26:56.261      "serial_number": "SPDK00000000000001",
00:26:56.261      "model_number": "SPDK bdev Controller",
00:26:56.261      "max_namespaces": 2,
00:26:56.261      "min_cntlid": 1,
00:26:56.261      "max_cntlid": 65519,
00:26:56.261      "namespaces": [
00:26:56.261        {
00:26:56.261          "nsid": 1,
00:26:56.261          "bdev_name": "Malloc0",
00:26:56.261          "name": "Malloc0",
00:26:56.261          "nguid": "1E1D2251A678482BAD9A3269C24F42CD",
00:26:56.261          "uuid": "1e1d2251-a678-482b-ad9a-3269c24f42cd"
00:26:56.261        },
00:26:56.261        {
00:26:56.261          "nsid": 2,
00:26:56.261          "bdev_name": "Malloc1",
00:26:56.261          "name": "Malloc1",
00:26:56.261          "nguid": "915AE544D32A4A3488E119935D17F5DF",
00:26:56.261          "uuid": "915ae544-d32a-4a34-88e1-19935d17f5df"
00:26:56.261        }
00:26:56.261      ]
00:26:56.261    }
00:26:56.261  ]
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3164480
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.261   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20}
00:26:56.262   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:26:56.262  rmmod nvme_tcp
00:26:56.262  rmmod nvme_fabrics
00:26:56.521  rmmod nvme_keyring
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3164454 ']'
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3164454
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3164454 ']'
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3164454
00:26:56.521    00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:26:56.521    00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3164454
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3164454'
00:26:56.521  killing process with pid 3164454
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3164454
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3164454
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:26:56.521   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore
00:26:56.780   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:26:56.780   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns
00:26:56.780   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:56.780   00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:56.780    00:08:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:58.686  
00:26:58.686  real	0m9.180s
00:26:58.686  user	0m5.166s
00:26:58.686  sys	0m4.796s
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:26:58.686  ************************************
00:26:58.686  END TEST nvmf_aer
00:26:58.686  ************************************
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:58.686  ************************************
00:26:58.686  START TEST nvmf_async_init
00:26:58.686  ************************************
00:26:58.686   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:26:58.946  * Looking for test storage...
00:26:58.947  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-:
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-:
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0
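The xtrace above steps through scripts/common.sh's `lt`/`cmp_versions` helpers deciding whether the installed lcov (1.15) predates 2.x, by splitting each version string on `.-:` and comparing components left to right. A minimal stand-alone sketch of the same component-wise comparison; `ver_lt` is a hypothetical name, and unlike the real helper this sketch only splits on `.` and assumes numeric components:

```shell
# Return 0 (true) if dotted version $1 is strictly less than $2,
# comparing numeric components left to right (missing parts count as 0).
ver_lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i len=${#v1[@]}
    (( ${#v2[@]} > len )) && len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1  # versions are equal, so not strictly less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace ends with `return 0` and then selects the `--rc lcov_branch_coverage=1 ...` option set for the older lcov.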
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:26:58.947  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:58.947  		--rc genhtml_branch_coverage=1
00:26:58.947  		--rc genhtml_function_coverage=1
00:26:58.947  		--rc genhtml_legend=1
00:26:58.947  		--rc geninfo_all_blocks=1
00:26:58.947  		--rc geninfo_unexecuted_blocks=1
00:26:58.947  		
00:26:58.947  		'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:26:58.947  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:58.947  		--rc genhtml_branch_coverage=1
00:26:58.947  		--rc genhtml_function_coverage=1
00:26:58.947  		--rc genhtml_legend=1
00:26:58.947  		--rc geninfo_all_blocks=1
00:26:58.947  		--rc geninfo_unexecuted_blocks=1
00:26:58.947  		
00:26:58.947  		'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:26:58.947  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:58.947  		--rc genhtml_branch_coverage=1
00:26:58.947  		--rc genhtml_function_coverage=1
00:26:58.947  		--rc genhtml_legend=1
00:26:58.947  		--rc geninfo_all_blocks=1
00:26:58.947  		--rc geninfo_unexecuted_blocks=1
00:26:58.947  		
00:26:58.947  		'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:26:58.947  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:58.947  		--rc genhtml_branch_coverage=1
00:26:58.947  		--rc genhtml_function_coverage=1
00:26:58.947  		--rc genhtml_legend=1
00:26:58.947  		--rc geninfo_all_blocks=1
00:26:58.947  		--rc geninfo_unexecuted_blocks=1
00:26:58.947  		
00:26:58.947  		'
00:26:58.947   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:58.947    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:58.947     00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:26:58.947      00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:58.948      00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:58.948      00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:58.948      00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH
00:26:58.948      00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:58.948  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d -
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=13d1fcfa794a4d739ca9af7ef35be848
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:58.948    00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable
00:26:58.948   00:08:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=()
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:05.519   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:27:05.520  Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:27:05.520  Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:27:05.520  Found net devices under 0000:af:00.0: cvl_0_0
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:27:05.520  Found net devices under 0000:af:00.1: cvl_0_1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:05.520  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:05.520  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
00:27:05.520  
00:27:05.520  --- 10.0.0.2 ping statistics ---
00:27:05.520  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:05.520  rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:05.520  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:05.520  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:27:05.520  
00:27:05.520  --- 10.0.0.1 ping statistics ---
00:27:05.520  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:05.520  rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:05.520   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3168156
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3168156
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3168156 ']'
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:05.521  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:05.521   00:08:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  [2024-12-10 00:08:20.814363] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:05.521  [2024-12-10 00:08:20.814409] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:05.521  [2024-12-10 00:08:20.890681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:05.521  [2024-12-10 00:08:20.928023] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:05.521  [2024-12-10 00:08:20.928058] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:05.521  [2024-12-10 00:08:20.928065] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:05.521  [2024-12-10 00:08:20.928071] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:05.521  [2024-12-10 00:08:20.928077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:05.521  [2024-12-10 00:08:20.928608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  [2024-12-10 00:08:21.076490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  null0
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 13d1fcfa794a4d739ca9af7ef35be848
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  [2024-12-10 00:08:21.120725] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  nvme0n1
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.521   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.521  [
00:27:05.521  {
00:27:05.521  "name": "nvme0n1",
00:27:05.521  "aliases": [
00:27:05.521  "13d1fcfa-794a-4d73-9ca9-af7ef35be848"
00:27:05.521  ],
00:27:05.521  "product_name": "NVMe disk",
00:27:05.521  "block_size": 512,
00:27:05.521  "num_blocks": 2097152,
00:27:05.521  "uuid": "13d1fcfa-794a-4d73-9ca9-af7ef35be848",
00:27:05.521  "numa_id": 1,
00:27:05.521  "assigned_rate_limits": {
00:27:05.521  "rw_ios_per_sec": 0,
00:27:05.521  "rw_mbytes_per_sec": 0,
00:27:05.521  "r_mbytes_per_sec": 0,
00:27:05.521  "w_mbytes_per_sec": 0
00:27:05.521  },
00:27:05.521  "claimed": false,
00:27:05.521  "zoned": false,
00:27:05.521  "supported_io_types": {
00:27:05.521  "read": true,
00:27:05.521  "write": true,
00:27:05.521  "unmap": false,
00:27:05.521  "flush": true,
00:27:05.521  "reset": true,
00:27:05.521  "nvme_admin": true,
00:27:05.521  "nvme_io": true,
00:27:05.521  "nvme_io_md": false,
00:27:05.521  "write_zeroes": true,
00:27:05.521  "zcopy": false,
00:27:05.521  "get_zone_info": false,
00:27:05.521  "zone_management": false,
00:27:05.521  "zone_append": false,
00:27:05.521  "compare": true,
00:27:05.521  "compare_and_write": true,
00:27:05.521  "abort": true,
00:27:05.521  "seek_hole": false,
00:27:05.521  "seek_data": false,
00:27:05.521  "copy": true,
00:27:05.521  "nvme_iov_md": false
00:27:05.522  },
00:27:05.522  "memory_domains": [
00:27:05.522  {
00:27:05.522  "dma_device_id": "system",
00:27:05.522  "dma_device_type": 1
00:27:05.522  }
00:27:05.522  ],
00:27:05.522  "driver_specific": {
00:27:05.522  "nvme": [
00:27:05.522  {
00:27:05.522  "trid": {
00:27:05.522  "trtype": "TCP",
00:27:05.522  "adrfam": "IPv4",
00:27:05.522  "traddr": "10.0.0.2",
00:27:05.522  "trsvcid": "4420",
00:27:05.522  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:05.522  },
00:27:05.522  "ctrlr_data": {
00:27:05.522  "cntlid": 1,
00:27:05.522  "vendor_id": "0x8086",
00:27:05.522  "model_number": "SPDK bdev Controller",
00:27:05.522  "serial_number": "00000000000000000000",
00:27:05.522  "firmware_revision": "25.01",
00:27:05.522  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:05.522  "oacs": {
00:27:05.522  "security": 0,
00:27:05.522  "format": 0,
00:27:05.522  "firmware": 0,
00:27:05.522  "ns_manage": 0
00:27:05.522  },
00:27:05.522  "multi_ctrlr": true,
00:27:05.780  "ana_reporting": false
00:27:05.780  },
00:27:05.780  "vs": {
00:27:05.781  "nvme_version": "1.3"
00:27:05.781  },
00:27:05.781  "ns_data": {
00:27:05.781  "id": 1,
00:27:05.781  "can_share": true
00:27:05.781  }
00:27:05.781  }
00:27:05.781  ],
00:27:05.781  "mp_policy": "active_passive"
00:27:05.781  }
00:27:05.781  }
00:27:05.781  ]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.781  [2024-12-10 00:08:21.382218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:27:05.781  [2024-12-10 00:08:21.382274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f8250 (9): Bad file descriptor
00:27:05.781  [2024-12-10 00:08:21.514242] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.781  [
00:27:05.781  {
00:27:05.781  "name": "nvme0n1",
00:27:05.781  "aliases": [
00:27:05.781  "13d1fcfa-794a-4d73-9ca9-af7ef35be848"
00:27:05.781  ],
00:27:05.781  "product_name": "NVMe disk",
00:27:05.781  "block_size": 512,
00:27:05.781  "num_blocks": 2097152,
00:27:05.781  "uuid": "13d1fcfa-794a-4d73-9ca9-af7ef35be848",
00:27:05.781  "numa_id": 1,
00:27:05.781  "assigned_rate_limits": {
00:27:05.781  "rw_ios_per_sec": 0,
00:27:05.781  "rw_mbytes_per_sec": 0,
00:27:05.781  "r_mbytes_per_sec": 0,
00:27:05.781  "w_mbytes_per_sec": 0
00:27:05.781  },
00:27:05.781  "claimed": false,
00:27:05.781  "zoned": false,
00:27:05.781  "supported_io_types": {
00:27:05.781  "read": true,
00:27:05.781  "write": true,
00:27:05.781  "unmap": false,
00:27:05.781  "flush": true,
00:27:05.781  "reset": true,
00:27:05.781  "nvme_admin": true,
00:27:05.781  "nvme_io": true,
00:27:05.781  "nvme_io_md": false,
00:27:05.781  "write_zeroes": true,
00:27:05.781  "zcopy": false,
00:27:05.781  "get_zone_info": false,
00:27:05.781  "zone_management": false,
00:27:05.781  "zone_append": false,
00:27:05.781  "compare": true,
00:27:05.781  "compare_and_write": true,
00:27:05.781  "abort": true,
00:27:05.781  "seek_hole": false,
00:27:05.781  "seek_data": false,
00:27:05.781  "copy": true,
00:27:05.781  "nvme_iov_md": false
00:27:05.781  },
00:27:05.781  "memory_domains": [
00:27:05.781  {
00:27:05.781  "dma_device_id": "system",
00:27:05.781  "dma_device_type": 1
00:27:05.781  }
00:27:05.781  ],
00:27:05.781  "driver_specific": {
00:27:05.781  "nvme": [
00:27:05.781  {
00:27:05.781  "trid": {
00:27:05.781  "trtype": "TCP",
00:27:05.781  "adrfam": "IPv4",
00:27:05.781  "traddr": "10.0.0.2",
00:27:05.781  "trsvcid": "4420",
00:27:05.781  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:05.781  },
00:27:05.781  "ctrlr_data": {
00:27:05.781  "cntlid": 2,
00:27:05.781  "vendor_id": "0x8086",
00:27:05.781  "model_number": "SPDK bdev Controller",
00:27:05.781  "serial_number": "00000000000000000000",
00:27:05.781  "firmware_revision": "25.01",
00:27:05.781  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:05.781  "oacs": {
00:27:05.781  "security": 0,
00:27:05.781  "format": 0,
00:27:05.781  "firmware": 0,
00:27:05.781  "ns_manage": 0
00:27:05.781  },
00:27:05.781  "multi_ctrlr": true,
00:27:05.781  "ana_reporting": false
00:27:05.781  },
00:27:05.781  "vs": {
00:27:05.781  "nvme_version": "1.3"
00:27:05.781  },
00:27:05.781  "ns_data": {
00:27:05.781  "id": 1,
00:27:05.781  "can_share": true
00:27:05.781  }
00:27:05.781  }
00:27:05.781  ],
00:27:05.781  "mp_policy": "active_passive"
00:27:05.781  }
00:27:05.781  }
00:27:05.781  ]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.781    00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.SrtgUWUYCy
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.SrtgUWUYCy
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.SrtgUWUYCy
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
00:27:05.781   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.782  [2024-12-10 00:08:21.586810] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:27:05.782  [2024-12-10 00:08:21.586909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:05.782   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:05.782  [2024-12-10 00:08:21.602868] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:27:06.041  nvme0n1
00:27:06.041   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.041   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1
00:27:06.041   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.041   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:06.041  [
00:27:06.041  {
00:27:06.041  "name": "nvme0n1",
00:27:06.041  "aliases": [
00:27:06.041  "13d1fcfa-794a-4d73-9ca9-af7ef35be848"
00:27:06.041  ],
00:27:06.041  "product_name": "NVMe disk",
00:27:06.041  "block_size": 512,
00:27:06.041  "num_blocks": 2097152,
00:27:06.041  "uuid": "13d1fcfa-794a-4d73-9ca9-af7ef35be848",
00:27:06.041  "numa_id": 1,
00:27:06.041  "assigned_rate_limits": {
00:27:06.041  "rw_ios_per_sec": 0,
00:27:06.041  "rw_mbytes_per_sec": 0,
00:27:06.041  "r_mbytes_per_sec": 0,
00:27:06.041  "w_mbytes_per_sec": 0
00:27:06.041  },
00:27:06.041  "claimed": false,
00:27:06.041  "zoned": false,
00:27:06.041  "supported_io_types": {
00:27:06.041  "read": true,
00:27:06.041  "write": true,
00:27:06.041  "unmap": false,
00:27:06.041  "flush": true,
00:27:06.041  "reset": true,
00:27:06.041  "nvme_admin": true,
00:27:06.041  "nvme_io": true,
00:27:06.041  "nvme_io_md": false,
00:27:06.041  "write_zeroes": true,
00:27:06.041  "zcopy": false,
00:27:06.041  "get_zone_info": false,
00:27:06.041  "zone_management": false,
00:27:06.041  "zone_append": false,
00:27:06.041  "compare": true,
00:27:06.041  "compare_and_write": true,
00:27:06.041  "abort": true,
00:27:06.041  "seek_hole": false,
00:27:06.041  "seek_data": false,
00:27:06.041  "copy": true,
00:27:06.041  "nvme_iov_md": false
00:27:06.041  },
00:27:06.041  "memory_domains": [
00:27:06.041  {
00:27:06.041  "dma_device_id": "system",
00:27:06.041  "dma_device_type": 1
00:27:06.041  }
00:27:06.041  ],
00:27:06.041  "driver_specific": {
00:27:06.042  "nvme": [
00:27:06.042  {
00:27:06.042  "trid": {
00:27:06.042  "trtype": "TCP",
00:27:06.042  "adrfam": "IPv4",
00:27:06.042  "traddr": "10.0.0.2",
00:27:06.042  "trsvcid": "4421",
00:27:06.042  "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:27:06.042  },
00:27:06.042  "ctrlr_data": {
00:27:06.042  "cntlid": 3,
00:27:06.042  "vendor_id": "0x8086",
00:27:06.042  "model_number": "SPDK bdev Controller",
00:27:06.042  "serial_number": "00000000000000000000",
00:27:06.042  "firmware_revision": "25.01",
00:27:06.042  "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:27:06.042  "oacs": {
00:27:06.042  "security": 0,
00:27:06.042  "format": 0,
00:27:06.042  "firmware": 0,
00:27:06.042  "ns_manage": 0
00:27:06.042  },
00:27:06.042  "multi_ctrlr": true,
00:27:06.042  "ana_reporting": false
00:27:06.042  },
00:27:06.042  "vs": {
00:27:06.042  "nvme_version": "1.3"
00:27:06.042  },
00:27:06.042  "ns_data": {
00:27:06.042  "id": 1,
00:27:06.042  "can_share": true
00:27:06.042  }
00:27:06.042  }
00:27:06.042  ],
00:27:06.042  "mp_policy": "active_passive"
00:27:06.042  }
00:27:06.042  }
00:27:06.042  ]
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.SrtgUWUYCy
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:06.042  rmmod nvme_tcp
00:27:06.042  rmmod nvme_fabrics
00:27:06.042  rmmod nvme_keyring
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3168156 ']'
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3168156
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3168156 ']'
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3168156
00:27:06.042    00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:06.042    00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3168156
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3168156'
00:27:06.042  killing process with pid 3168156
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3168156
00:27:06.042   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3168156
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:06.301   00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:06.301    00:08:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:08.207   00:08:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:08.207  
00:27:08.207  real	0m9.506s
00:27:08.207  user	0m2.999s
00:27:08.207  sys	0m4.837s
00:27:08.207   00:08:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:08.207   00:08:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x
00:27:08.207  ************************************
00:27:08.207  END TEST nvmf_async_init
00:27:08.207  ************************************
00:27:08.467   00:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:27:08.467   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:08.467   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:08.467   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.467  ************************************
00:27:08.467  START TEST dma
00:27:08.467  ************************************
00:27:08.467   00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp
00:27:08.467  * Looking for test storage...
00:27:08.467  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-:
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-:
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<'
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:08.467     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1
00:27:08.467    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:08.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.468  		--rc genhtml_branch_coverage=1
00:27:08.468  		--rc genhtml_function_coverage=1
00:27:08.468  		--rc genhtml_legend=1
00:27:08.468  		--rc geninfo_all_blocks=1
00:27:08.468  		--rc geninfo_unexecuted_blocks=1
00:27:08.468  		
00:27:08.468  		'
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:08.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.468  		--rc genhtml_branch_coverage=1
00:27:08.468  		--rc genhtml_function_coverage=1
00:27:08.468  		--rc genhtml_legend=1
00:27:08.468  		--rc geninfo_all_blocks=1
00:27:08.468  		--rc geninfo_unexecuted_blocks=1
00:27:08.468  		
00:27:08.468  		'
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:08.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.468  		--rc genhtml_branch_coverage=1
00:27:08.468  		--rc genhtml_function_coverage=1
00:27:08.468  		--rc genhtml_legend=1
00:27:08.468  		--rc geninfo_all_blocks=1
00:27:08.468  		--rc geninfo_unexecuted_blocks=1
00:27:08.468  		
00:27:08.468  		'
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:08.468  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.468  		--rc genhtml_branch_coverage=1
00:27:08.468  		--rc genhtml_function_coverage=1
00:27:08.468  		--rc genhtml_legend=1
00:27:08.468  		--rc geninfo_all_blocks=1
00:27:08.468  		--rc geninfo_unexecuted_blocks=1
00:27:08.468  		
00:27:08.468  		'
00:27:08.468   00:08:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:08.468    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:08.468     00:08:24 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:08.468      00:08:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.468      00:08:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.468      00:08:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.468      00:08:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH
00:27:08.469      00:08:24 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:08.469  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:08.469    00:08:24 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']'
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0
00:27:08.728  
00:27:08.728  real	0m0.204s
00:27:08.728  user	0m0.128s
00:27:08.728  sys	0m0.089s
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x
00:27:08.728  ************************************
00:27:08.728  END TEST dma
00:27:08.728  ************************************
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:08.728  ************************************
00:27:08.728  START TEST nvmf_identify
00:27:08.728  ************************************
00:27:08.728   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp
00:27:08.728  * Looking for test storage...
00:27:08.728  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-:
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-:
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<'
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:08.728     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0
00:27:08.728    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:08.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.729  		--rc genhtml_branch_coverage=1
00:27:08.729  		--rc genhtml_function_coverage=1
00:27:08.729  		--rc genhtml_legend=1
00:27:08.729  		--rc geninfo_all_blocks=1
00:27:08.729  		--rc geninfo_unexecuted_blocks=1
00:27:08.729  		
00:27:08.729  		'
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:08.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.729  		--rc genhtml_branch_coverage=1
00:27:08.729  		--rc genhtml_function_coverage=1
00:27:08.729  		--rc genhtml_legend=1
00:27:08.729  		--rc geninfo_all_blocks=1
00:27:08.729  		--rc geninfo_unexecuted_blocks=1
00:27:08.729  		
00:27:08.729  		'
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:08.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.729  		--rc genhtml_branch_coverage=1
00:27:08.729  		--rc genhtml_function_coverage=1
00:27:08.729  		--rc genhtml_legend=1
00:27:08.729  		--rc geninfo_all_blocks=1
00:27:08.729  		--rc geninfo_unexecuted_blocks=1
00:27:08.729  		
00:27:08.729  		'
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:08.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:08.729  		--rc genhtml_branch_coverage=1
00:27:08.729  		--rc genhtml_function_coverage=1
00:27:08.729  		--rc genhtml_legend=1
00:27:08.729  		--rc geninfo_all_blocks=1
00:27:08.729  		--rc geninfo_unexecuted_blocks=1
00:27:08.729  		
00:27:08.729  		'
00:27:08.729   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:08.729     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:08.729     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:08.729    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:08.988     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob
00:27:08.988     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:08.988     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:08.988     00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:08.988      00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.988      00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.988      00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.988      00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH
00:27:08.988      00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:08.988  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:08.988    00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable
00:27:08.988   00:08:24 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=()
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:15.572   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:27:15.573  Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:27:15.573  Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:27:15.573  Found net devices under 0000:af:00.0: cvl_0_0
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:27:15.573  Found net devices under 0000:af:00.1: cvl_0_1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:15.573  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:15.573  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms
00:27:15.573  
00:27:15.573  --- 10.0.0.2 ping statistics ---
00:27:15.573  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:15.573  rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:15.573  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:15.573  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms
00:27:15.573  
00:27:15.573  --- 10.0.0.1 ping statistics ---
00:27:15.573  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:15.573  rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3171815
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3171815
00:27:15.573   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3171815 ']'
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:15.574  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574  [2024-12-10 00:08:30.533314] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:15.574  [2024-12-10 00:08:30.533363] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:15.574  [2024-12-10 00:08:30.606617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:15.574  [2024-12-10 00:08:30.662726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:15.574  [2024-12-10 00:08:30.662769] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:15.574  [2024-12-10 00:08:30.662781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:15.574  [2024-12-10 00:08:30.662805] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:15.574  [2024-12-10 00:08:30.662814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:15.574  [2024-12-10 00:08:30.664794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:15.574  [2024-12-10 00:08:30.664906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:15.574  [2024-12-10 00:08:30.665013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:15.574  [2024-12-10 00:08:30.665015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574  [2024-12-10 00:08:30.783010] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574  Malloc0
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574  [2024-12-10 00:08:30.877846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.574  [
00:27:15.574  {
00:27:15.574  "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:27:15.574  "subtype": "Discovery",
00:27:15.574  "listen_addresses": [
00:27:15.574  {
00:27:15.574  "trtype": "TCP",
00:27:15.574  "adrfam": "IPv4",
00:27:15.574  "traddr": "10.0.0.2",
00:27:15.574  "trsvcid": "4420"
00:27:15.574  }
00:27:15.574  ],
00:27:15.574  "allow_any_host": true,
00:27:15.574  "hosts": []
00:27:15.574  },
00:27:15.574  {
00:27:15.574  "nqn": "nqn.2016-06.io.spdk:cnode1",
00:27:15.574  "subtype": "NVMe",
00:27:15.574  "listen_addresses": [
00:27:15.574  {
00:27:15.574  "trtype": "TCP",
00:27:15.574  "adrfam": "IPv4",
00:27:15.574  "traddr": "10.0.0.2",
00:27:15.574  "trsvcid": "4420"
00:27:15.574  }
00:27:15.574  ],
00:27:15.574  "allow_any_host": true,
00:27:15.574  "hosts": [],
00:27:15.574  "serial_number": "SPDK00000000000001",
00:27:15.574  "model_number": "SPDK bdev Controller",
00:27:15.574  "max_namespaces": 32,
00:27:15.574  "min_cntlid": 1,
00:27:15.574  "max_cntlid": 65519,
00:27:15.574  "namespaces": [
00:27:15.574  {
00:27:15.574  "nsid": 1,
00:27:15.574  "bdev_name": "Malloc0",
00:27:15.574  "name": "Malloc0",
00:27:15.574  "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:27:15.574  "eui64": "ABCDEF0123456789",
00:27:15.574  "uuid": "63ff956c-44e1-4034-91bf-61ffd742ecd1"
00:27:15.574  }
00:27:15.574  ]
00:27:15.574  }
00:27:15.574  ]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.574   00:08:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:27:15.574  [2024-12-10 00:08:30.932206] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:15.574  [2024-12-10 00:08:30.932253] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171944 ]
00:27:15.574  [2024-12-10 00:08:30.970140] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout)
00:27:15.574  [2024-12-10 00:08:30.974186] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:27:15.574  [2024-12-10 00:08:30.974193] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:27:15.574  [2024-12-10 00:08:30.974203] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:27:15.574  [2024-12-10 00:08:30.974213] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:27:15.574  [2024-12-10 00:08:30.974718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout)
00:27:15.574  [2024-12-10 00:08:30.974749] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1105690 0
00:27:15.574  [2024-12-10 00:08:30.985173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:27:15.574  [2024-12-10 00:08:30.985187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:27:15.574  [2024-12-10 00:08:30.985191] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:27:15.574  [2024-12-10 00:08:30.985194] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:27:15.574  [2024-12-10 00:08:30.985227] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.574  [2024-12-10 00:08:30.985232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.574  [2024-12-10 00:08:30.985235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.574  [2024-12-10 00:08:30.985247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:15.575  [2024-12-10 00:08:30.985264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993206] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:27:15.575  [2024-12-10 00:08:30.993213] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout)
00:27:15.575  [2024-12-10 00:08:30.993218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout)
00:27:15.575  [2024-12-10 00:08:30.993233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993360] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993363] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993372] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout)
00:27:15.575  [2024-12-10 00:08:30.993378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout)
00:27:15.575  [2024-12-10 00:08:30.993384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993387] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993477] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout)
00:27:15.575  [2024-12-10 00:08:30.993495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993524] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993588] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993613] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993620] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993638] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993707] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993718] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0
00:27:15.575  [2024-12-10 00:08:30.993723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993729] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993837] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1
00:27:15.575  [2024-12-10 00:08:30.993841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.993930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.993936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.993939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.993946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:27:15.575  [2024-12-10 00:08:30.993954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.993961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.575  [2024-12-10 00:08:30.993966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.575  [2024-12-10 00:08:30.993975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.575  [2024-12-10 00:08:30.994042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.575  [2024-12-10 00:08:30.994048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.575  [2024-12-10 00:08:30.994051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.994054] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.575  [2024-12-10 00:08:30.994058] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:27:15.575  [2024-12-10 00:08:30.994062] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms)
00:27:15.575  [2024-12-10 00:08:30.994071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout)
00:27:15.575  [2024-12-10 00:08:30.994078] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms)
00:27:15.575  [2024-12-10 00:08:30.994088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.575  [2024-12-10 00:08:30.994092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:30.994097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.576  [2024-12-10 00:08:30.994107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.576  [2024-12-10 00:08:30.994207] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.576  [2024-12-10 00:08:30.994214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.576  [2024-12-10 00:08:30.994217] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:30.994221] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1105690): datao=0, datal=4096, cccid=0
00:27:15.576  [2024-12-10 00:08:30.994225] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1167100) on tqpair(0x1105690): expected_datao=0, payload_size=4096
00:27:15.576  [2024-12-10 00:08:30.994229] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:30.994235] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:30.994239] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.576  [2024-12-10 00:08:31.036313] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.576  [2024-12-10 00:08:31.036316] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.576  [2024-12-10 00:08:31.036328] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295
00:27:15.576  [2024-12-10 00:08:31.036333] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072
00:27:15.576  [2024-12-10 00:08:31.036337] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001
00:27:15.576  [2024-12-10 00:08:31.036341] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16
00:27:15.576  [2024-12-10 00:08:31.036345] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1
00:27:15.576  [2024-12-10 00:08:31.036350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms)
00:27:15.576  [2024-12-10 00:08:31.036358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms)
00:27:15.576  [2024-12-10 00:08:31.036365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036379] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:15.576  [2024-12-10 00:08:31.036390] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.576  [2024-12-10 00:08:31.036460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.576  [2024-12-10 00:08:31.036470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.576  [2024-12-10 00:08:31.036474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036477] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.576  [2024-12-10 00:08:31.036483] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.576  [2024-12-10 00:08:31.036500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036504] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036507] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.576  [2024-12-10 00:08:31.036517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.576  [2024-12-10 00:08:31.036533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036539] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.576  [2024-12-10 00:08:31.036548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:27:15.576  [2024-12-10 00:08:31.036559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:27:15.576  [2024-12-10 00:08:31.036565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.576  [2024-12-10 00:08:31.036585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167100, cid 0, qid 0
00:27:15.576  [2024-12-10 00:08:31.036590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167280, cid 1, qid 0
00:27:15.576  [2024-12-10 00:08:31.036594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167400, cid 2, qid 0
00:27:15.576  [2024-12-10 00:08:31.036598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.576  [2024-12-10 00:08:31.036601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167700, cid 4, qid 0
00:27:15.576  [2024-12-10 00:08:31.036698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.576  [2024-12-10 00:08:31.036704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.576  [2024-12-10 00:08:31.036707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167700) on tqpair=0x1105690
00:27:15.576  [2024-12-10 00:08:31.036714] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us
00:27:15.576  [2024-12-10 00:08:31.036720] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout)
00:27:15.576  [2024-12-10 00:08:31.036730] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1105690)
00:27:15.576  [2024-12-10 00:08:31.036739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.576  [2024-12-10 00:08:31.036749] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167700, cid 4, qid 0
00:27:15.576  [2024-12-10 00:08:31.036817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.576  [2024-12-10 00:08:31.036822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.576  [2024-12-10 00:08:31.036825] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036829] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1105690): datao=0, datal=4096, cccid=4
00:27:15.576  [2024-12-10 00:08:31.036832] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1167700) on tqpair(0x1105690): expected_datao=0, payload_size=4096
00:27:15.576  [2024-12-10 00:08:31.036836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036851] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.576  [2024-12-10 00:08:31.036855] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.036892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.577  [2024-12-10 00:08:31.036898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.577  [2024-12-10 00:08:31.036901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.036904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167700) on tqpair=0x1105690
00:27:15.577  [2024-12-10 00:08:31.036914] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state
00:27:15.577  [2024-12-10 00:08:31.036935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.036939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1105690)
00:27:15.577  [2024-12-10 00:08:31.036944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.577  [2024-12-10 00:08:31.036950] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.036953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.036956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1105690)
00:27:15.577  [2024-12-10 00:08:31.036961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.577  [2024-12-10 00:08:31.036974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167700, cid 4, qid 0
00:27:15.577  [2024-12-10 00:08:31.036979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167880, cid 5, qid 0
00:27:15.577  [2024-12-10 00:08:31.037071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.577  [2024-12-10 00:08:31.037077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.577  [2024-12-10 00:08:31.037080] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.037083] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1105690): datao=0, datal=1024, cccid=4
00:27:15.577  [2024-12-10 00:08:31.037087] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1167700) on tqpair(0x1105690): expected_datao=0, payload_size=1024
00:27:15.577  [2024-12-10 00:08:31.037091] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.037096] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.037099] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.037106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.577  [2024-12-10 00:08:31.037111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.577  [2024-12-10 00:08:31.037114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.037118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167880) on tqpair=0x1105690
00:27:15.577  [2024-12-10 00:08:31.081174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.577  [2024-12-10 00:08:31.081184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.577  [2024-12-10 00:08:31.081187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.081191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167700) on tqpair=0x1105690
00:27:15.577  [2024-12-10 00:08:31.081202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.081206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1105690)
00:27:15.577  [2024-12-10 00:08:31.081212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.577  [2024-12-10 00:08:31.081228] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167700, cid 4, qid 0
00:27:15.577  [2024-12-10 00:08:31.081380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.577  [2024-12-10 00:08:31.081386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.577  [2024-12-10 00:08:31.081389] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.081392] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1105690): datao=0, datal=3072, cccid=4
00:27:15.577  [2024-12-10 00:08:31.081396] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1167700) on tqpair(0x1105690): expected_datao=0, payload_size=3072
00:27:15.577  [2024-12-10 00:08:31.081400] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.081415] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.081419] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.577  [2024-12-10 00:08:31.123314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.577  [2024-12-10 00:08:31.123317] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123321] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167700) on tqpair=0x1105690
00:27:15.577  [2024-12-10 00:08:31.123330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1105690)
00:27:15.577  [2024-12-10 00:08:31.123340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.577  [2024-12-10 00:08:31.123354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167700, cid 4, qid 0
00:27:15.577  [2024-12-10 00:08:31.123421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.577  [2024-12-10 00:08:31.123426] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.577  [2024-12-10 00:08:31.123429] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123432] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1105690): datao=0, datal=8, cccid=4
00:27:15.577  [2024-12-10 00:08:31.123436] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1167700) on tqpair(0x1105690): expected_datao=0, payload_size=8
00:27:15.577  [2024-12-10 00:08:31.123440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123446] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.123449] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.169174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.577  [2024-12-10 00:08:31.169185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.577  [2024-12-10 00:08:31.169188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.577  [2024-12-10 00:08:31.169191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167700) on tqpair=0x1105690
00:27:15.577  =====================================================
00:27:15.577  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:15.577  =====================================================
00:27:15.577  Controller Capabilities/Features
00:27:15.577  ================================
00:27:15.577  Vendor ID:                             0000
00:27:15.577  Subsystem Vendor ID:                   0000
00:27:15.577  Serial Number:                         ....................
00:27:15.577  Model Number:                          ........................................
00:27:15.577  Firmware Version:                      25.01
00:27:15.577  Recommended Arb Burst:                 0
00:27:15.577  IEEE OUI Identifier:                   00 00 00
00:27:15.577  Multi-path I/O
00:27:15.577    May have multiple subsystem ports:   No
00:27:15.577    May have multiple controllers:       No
00:27:15.577    Associated with SR-IOV VF:           No
00:27:15.577  Max Data Transfer Size:                131072
00:27:15.577  Max Number of Namespaces:              0
00:27:15.577  Max Number of I/O Queues:              1024
00:27:15.577  NVMe Specification Version (VS):       1.3
00:27:15.577  NVMe Specification Version (Identify): 1.3
00:27:15.577  Maximum Queue Entries:                 128
00:27:15.577  Contiguous Queues Required:            Yes
00:27:15.577  Arbitration Mechanisms Supported
00:27:15.577    Weighted Round Robin:                Not Supported
00:27:15.577    Vendor Specific:                     Not Supported
00:27:15.577  Reset Timeout:                         15000 ms
00:27:15.577  Doorbell Stride:                       4 bytes
00:27:15.577  NVM Subsystem Reset:                   Not Supported
00:27:15.577  Command Sets Supported
00:27:15.577    NVM Command Set:                     Supported
00:27:15.577  Boot Partition:                        Not Supported
00:27:15.577  Memory Page Size Minimum:              4096 bytes
00:27:15.577  Memory Page Size Maximum:              4096 bytes
00:27:15.577  Persistent Memory Region:              Not Supported
00:27:15.577  Optional Asynchronous Events Supported
00:27:15.577    Namespace Attribute Notices:         Not Supported
00:27:15.577    Firmware Activation Notices:         Not Supported
00:27:15.577    ANA Change Notices:                  Not Supported
00:27:15.577    PLE Aggregate Log Change Notices:    Not Supported
00:27:15.577    LBA Status Info Alert Notices:       Not Supported
00:27:15.578    EGE Aggregate Log Change Notices:    Not Supported
00:27:15.578    Normal NVM Subsystem Shutdown event: Not Supported
00:27:15.578    Zone Descriptor Change Notices:      Not Supported
00:27:15.578    Discovery Log Change Notices:        Supported
00:27:15.578  Controller Attributes
00:27:15.578    128-bit Host Identifier:             Not Supported
00:27:15.578    Non-Operational Permissive Mode:     Not Supported
00:27:15.578    NVM Sets:                            Not Supported
00:27:15.578    Read Recovery Levels:                Not Supported
00:27:15.578    Endurance Groups:                    Not Supported
00:27:15.578    Predictable Latency Mode:            Not Supported
00:27:15.578    Traffic Based Keep Alive:            Not Supported
00:27:15.578    Namespace Granularity:               Not Supported
00:27:15.578    SQ Associations:                     Not Supported
00:27:15.578    UUID List:                           Not Supported
00:27:15.578    Multi-Domain Subsystem:              Not Supported
00:27:15.578    Fixed Capacity Management:           Not Supported
00:27:15.578    Variable Capacity Management:        Not Supported
00:27:15.578    Delete Endurance Group:              Not Supported
00:27:15.578    Delete NVM Set:                      Not Supported
00:27:15.578    Extended LBA Formats Supported:      Not Supported
00:27:15.578    Flexible Data Placement Supported:   Not Supported
00:27:15.578  
00:27:15.578  Controller Memory Buffer Support
00:27:15.578  ================================
00:27:15.578  Supported:                             No
00:27:15.578  
00:27:15.578  Persistent Memory Region Support
00:27:15.578  ================================
00:27:15.578  Supported:                             No
00:27:15.578  
00:27:15.578  Admin Command Set Attributes
00:27:15.578  ============================
00:27:15.578  Security Send/Receive:                 Not Supported
00:27:15.578  Format NVM:                            Not Supported
00:27:15.578  Firmware Activate/Download:            Not Supported
00:27:15.578  Namespace Management:                  Not Supported
00:27:15.578  Device Self-Test:                      Not Supported
00:27:15.578  Directives:                            Not Supported
00:27:15.578  NVMe-MI:                               Not Supported
00:27:15.578  Virtualization Management:             Not Supported
00:27:15.578  Doorbell Buffer Config:                Not Supported
00:27:15.578  Get LBA Status Capability:             Not Supported
00:27:15.578  Command & Feature Lockdown Capability: Not Supported
00:27:15.578  Abort Command Limit:                   1
00:27:15.578  Async Event Request Limit:             4
00:27:15.578  Number of Firmware Slots:              N/A
00:27:15.578  Firmware Slot 1 Read-Only:             N/A
00:27:15.578  Firmware Activation Without Reset:     N/A
00:27:15.578  Multiple Update Detection Support:     N/A
00:27:15.578  Firmware Update Granularity:           No Information Provided
00:27:15.578  Per-Namespace SMART Log:               No
00:27:15.578  Asymmetric Namespace Access Log Page:  Not Supported
00:27:15.578  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:27:15.578  Command Effects Log Page:              Not Supported
00:27:15.578  Get Log Page Extended Data:            Supported
00:27:15.578  Telemetry Log Pages:                   Not Supported
00:27:15.578  Persistent Event Log Pages:            Not Supported
00:27:15.578  Supported Log Pages Log Page:          May Support
00:27:15.578  Commands Supported & Effects Log Page: Not Supported
00:27:15.578  Feature Identifiers & Effects Log Page:May Support
00:27:15.578  NVMe-MI Commands & Effects Log Page:   May Support
00:27:15.578  Data Area 4 for Telemetry Log:         Not Supported
00:27:15.578  Error Log Page Entries Supported:      128
00:27:15.578  Keep Alive:                            Not Supported
00:27:15.578  
00:27:15.578  NVM Command Set Attributes
00:27:15.578  ==========================
00:27:15.578  Submission Queue Entry Size
00:27:15.578    Max:                       1
00:27:15.578    Min:                       1
00:27:15.578  Completion Queue Entry Size
00:27:15.578    Max:                       1
00:27:15.578    Min:                       1
00:27:15.578  Number of Namespaces:        0
00:27:15.578  Compare Command:             Not Supported
00:27:15.578  Write Uncorrectable Command: Not Supported
00:27:15.578  Dataset Management Command:  Not Supported
00:27:15.578  Write Zeroes Command:        Not Supported
00:27:15.578  Set Features Save Field:     Not Supported
00:27:15.578  Reservations:                Not Supported
00:27:15.578  Timestamp:                   Not Supported
00:27:15.578  Copy:                        Not Supported
00:27:15.578  Volatile Write Cache:        Not Present
00:27:15.578  Atomic Write Unit (Normal):  1
00:27:15.578  Atomic Write Unit (PFail):   1
00:27:15.578  Atomic Compare & Write Unit: 1
00:27:15.578  Fused Compare & Write:       Supported
00:27:15.578  Scatter-Gather List
00:27:15.578    SGL Command Set:           Supported
00:27:15.578    SGL Keyed:                 Supported
00:27:15.578    SGL Bit Bucket Descriptor: Not Supported
00:27:15.578    SGL Metadata Pointer:      Not Supported
00:27:15.578    Oversized SGL:             Not Supported
00:27:15.578    SGL Metadata Address:      Not Supported
00:27:15.578    SGL Offset:                Supported
00:27:15.578    Transport SGL Data Block:  Not Supported
00:27:15.578  Replay Protected Memory Block:  Not Supported
00:27:15.578  
00:27:15.578  Firmware Slot Information
00:27:15.578  =========================
00:27:15.578  Active slot:                 0
00:27:15.578  
00:27:15.578  
00:27:15.578  Error Log
00:27:15.578  =========
00:27:15.578  
00:27:15.578  Active Namespaces
00:27:15.578  =================
00:27:15.578  Discovery Log Page
00:27:15.578  ==================
00:27:15.578  Generation Counter:                    2
00:27:15.578  Number of Records:                     2
00:27:15.578  Record Format:                         0
00:27:15.578  
00:27:15.578  Discovery Log Entry 0
00:27:15.578  ----------------------
00:27:15.578  Transport Type:                        3 (TCP)
00:27:15.578  Address Family:                        1 (IPv4)
00:27:15.578  Subsystem Type:                        3 (Current Discovery Subsystem)
00:27:15.578  Entry Flags:
00:27:15.578    Duplicate Returned Information:      1
00:27:15.578    Explicit Persistent Connection Support for Discovery: 1
00:27:15.578  Transport Requirements:
00:27:15.578    Secure Channel:                      Not Required
00:27:15.578  Port ID:                               0 (0x0000)
00:27:15.578  Controller ID:                         65535 (0xffff)
00:27:15.578  Admin Max SQ Size:                     128
00:27:15.578  Transport Service Identifier:          4420
00:27:15.578  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:27:15.578  Transport Address:                     10.0.0.2
00:27:15.578  Discovery Log Entry 1
00:27:15.578  ----------------------
00:27:15.578  Transport Type:                        3 (TCP)
00:27:15.578  Address Family:                        1 (IPv4)
00:27:15.578  Subsystem Type:                        2 (NVM Subsystem)
00:27:15.578  Entry Flags:
00:27:15.578    Duplicate Returned Information:      0
00:27:15.578    Explicit Persistent Connection Support for Discovery: 0
00:27:15.578  Transport Requirements:
00:27:15.578    Secure Channel:                      Not Required
00:27:15.578  Port ID:                               0 (0x0000)
00:27:15.578  Controller ID:                         65535 (0xffff)
00:27:15.578  Admin Max SQ Size:                     128
00:27:15.578  Transport Service Identifier:          4420
00:27:15.578  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:cnode1
00:27:15.579  Transport Address:                     10.0.0.2
00:27:15.579  [2024-12-10 00:08:31.169269] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:27:15.579  [2024-12-10 00:08:31.169279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167100) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.579  [2024-12-10 00:08:31.169290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167280) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.579  [2024-12-10 00:08:31.169298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167400) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.579  [2024-12-10 00:08:31.169306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.579  [2024-12-10 00:08:31.169318] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169321] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169324] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169421] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169427] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169519] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169528] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169531] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169535] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:27:15.579  [2024-12-10 00:08:31.169539] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:27:15.579  [2024-12-10 00:08:31.169547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169634] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169637] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169652] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169655] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169669] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169763] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169772] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169840] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.169940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.169946] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.169949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.169960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.169966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.169973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.169983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.170051] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.170056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.170059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.170062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.170070] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.170073] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.170076] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.579  [2024-12-10 00:08:31.170082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.579  [2024-12-10 00:08:31.170092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.579  [2024-12-10 00:08:31.170156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.579  [2024-12-10 00:08:31.170161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.579  [2024-12-10 00:08:31.170164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.579  [2024-12-10 00:08:31.170174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.579  [2024-12-10 00:08:31.170183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170196] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170272] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170277] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170280] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170295] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170312] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170394] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170397] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170400] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170499] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170711] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170722] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170734] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170809] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170815] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170829] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.170921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.170926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.170929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170932] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.170941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.170947] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.170953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.170961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.171022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.171028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.171031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.171042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.171054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.171063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.171147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.171152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.580  [2024-12-10 00:08:31.171155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.580  [2024-12-10 00:08:31.171170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.580  [2024-12-10 00:08:31.171177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.580  [2024-12-10 00:08:31.171183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.580  [2024-12-10 00:08:31.171193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.580  [2024-12-10 00:08:31.171257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.580  [2024-12-10 00:08:31.171263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171265] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171269] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171276] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171280] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171283] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171392] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171396] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171473] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171479] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171481] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171500] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171575] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171580] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171583] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171586] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171598] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171601] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171686] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171700] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171718] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171789] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171792] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171800] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171803] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171806] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.171904] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171910] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.171916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.171925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.171988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.171993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.171996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.171999] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.172007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172011] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172014] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.172019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.172029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.172112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.172117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.172120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.172132] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172135] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172138] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.172144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.172152] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.172224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.172229] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.581  [2024-12-10 00:08:31.172234] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172237] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.581  [2024-12-10 00:08:31.172245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.581  [2024-12-10 00:08:31.172252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.581  [2024-12-10 00:08:31.172257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.581  [2024-12-10 00:08:31.172266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.581  [2024-12-10 00:08:31.172331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.581  [2024-12-10 00:08:31.172336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172449] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172452] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172470] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172673] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172686] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172690] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172777] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172781] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172789] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172793] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172889] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172896] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.172901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.172910] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.172971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.172977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.172980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.172991] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172994] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.172997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.173002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.173011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.173090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.173095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.173098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.173101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.173111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.173114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.173117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.173123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.173132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.177175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.177182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.177185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.177188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.177197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.177201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.177204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1105690)
00:27:15.582  [2024-12-10 00:08:31.177209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.582  [2024-12-10 00:08:31.177220] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1167580, cid 3, qid 0
00:27:15.582  [2024-12-10 00:08:31.177283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.582  [2024-12-10 00:08:31.177288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.582  [2024-12-10 00:08:31.177291] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.582  [2024-12-10 00:08:31.177295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1167580) on tqpair=0x1105690
00:27:15.582  [2024-12-10 00:08:31.177301] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:27:15.582                                                                                                                                                                                                                            
00:27:15.582   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:27:15.583  [2024-12-10 00:08:31.215037] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:15.583  [2024-12-10 00:08:31.215070] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3171948 ]
00:27:15.583  [2024-12-10 00:08:31.255347] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:27:15.583  [2024-12-10 00:08:31.255390] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:27:15.583  [2024-12-10 00:08:31.255396] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:27:15.583  [2024-12-10 00:08:31.255406] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:27:15.583  [2024-12-10 00:08:31.255415] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:27:15.583  [2024-12-10 00:08:31.259311] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:27:15.583  [2024-12-10 00:08:31.259339] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x122f690 0
00:27:15.583  [2024-12-10 00:08:31.267174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:27:15.583  [2024-12-10 00:08:31.267187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:27:15.583  [2024-12-10 00:08:31.267194] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:27:15.583  [2024-12-10 00:08:31.267197] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:27:15.583  [2024-12-10 00:08:31.267223] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.267227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.267231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.583  [2024-12-10 00:08:31.267241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:27:15.583  [2024-12-10 00:08:31.267258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.583  [2024-12-10 00:08:31.275174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.583  [2024-12-10 00:08:31.275182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.583  [2024-12-10 00:08:31.275185] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275189] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.583  [2024-12-10 00:08:31.275200] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:27:15.583  [2024-12-10 00:08:31.275206] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:27:15.583  [2024-12-10 00:08:31.275211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:27:15.583  [2024-12-10 00:08:31.275220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275224] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.583  [2024-12-10 00:08:31.275234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.583  [2024-12-10 00:08:31.275247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.583  [2024-12-10 00:08:31.275335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.583  [2024-12-10 00:08:31.275341] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.583  [2024-12-10 00:08:31.275344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.583  [2024-12-10 00:08:31.275351] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:27:15.583  [2024-12-10 00:08:31.275358] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:27:15.583  [2024-12-10 00:08:31.275364] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275370] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.583  [2024-12-10 00:08:31.275376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.583  [2024-12-10 00:08:31.275386] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.583  [2024-12-10 00:08:31.275445] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.583  [2024-12-10 00:08:31.275451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.583  [2024-12-10 00:08:31.275454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.583  [2024-12-10 00:08:31.275461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:27:15.583  [2024-12-10 00:08:31.275470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:27:15.583  [2024-12-10 00:08:31.275476] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275483] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.583  [2024-12-10 00:08:31.275488] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.583  [2024-12-10 00:08:31.275498] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.583  [2024-12-10 00:08:31.275560] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.583  [2024-12-10 00:08:31.275565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.583  [2024-12-10 00:08:31.275569] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.583  [2024-12-10 00:08:31.275576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:27:15.583  [2024-12-10 00:08:31.275584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275591] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.583  [2024-12-10 00:08:31.275596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.583  [2024-12-10 00:08:31.275605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.583  [2024-12-10 00:08:31.275664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.583  [2024-12-10 00:08:31.275669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.583  [2024-12-10 00:08:31.275672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.583  [2024-12-10 00:08:31.275679] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:27:15.583  [2024-12-10 00:08:31.275683] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:27:15.583  [2024-12-10 00:08:31.275690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:27:15.583  [2024-12-10 00:08:31.275798] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:27:15.583  [2024-12-10 00:08:31.275802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:27:15.583  [2024-12-10 00:08:31.275808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.583  [2024-12-10 00:08:31.275812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.275815] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.275820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.584  [2024-12-10 00:08:31.275830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.584  [2024-12-10 00:08:31.275892] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.584  [2024-12-10 00:08:31.275898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.584  [2024-12-10 00:08:31.275901] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.275905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.584  [2024-12-10 00:08:31.275910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:27:15.584  [2024-12-10 00:08:31.275918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.275921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.275924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.275930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.584  [2024-12-10 00:08:31.275939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.584  [2024-12-10 00:08:31.276008] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.584  [2024-12-10 00:08:31.276013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.584  [2024-12-10 00:08:31.276016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.584  [2024-12-10 00:08:31.276023] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:27:15.584  [2024-12-10 00:08:31.276027] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276034] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:27:15.584  [2024-12-10 00:08:31.276040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.584  [2024-12-10 00:08:31.276071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.584  [2024-12-10 00:08:31.276171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.584  [2024-12-10 00:08:31.276178] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.584  [2024-12-10 00:08:31.276181] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276184] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=4096, cccid=0
00:27:15.584  [2024-12-10 00:08:31.276188] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291100) on tqpair(0x122f690): expected_datao=0, payload_size=4096
00:27:15.584  [2024-12-10 00:08:31.276192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276198] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276201] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.584  [2024-12-10 00:08:31.276216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.584  [2024-12-10 00:08:31.276219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.584  [2024-12-10 00:08:31.276229] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:27:15.584  [2024-12-10 00:08:31.276233] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:27:15.584  [2024-12-10 00:08:31.276237] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:27:15.584  [2024-12-10 00:08:31.276243] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:27:15.584  [2024-12-10 00:08:31.276246] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:27:15.584  [2024-12-10 00:08:31.276251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:15.584  [2024-12-10 00:08:31.276287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.584  [2024-12-10 00:08:31.276352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.584  [2024-12-10 00:08:31.276358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.584  [2024-12-10 00:08:31.276361] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.584  [2024-12-10 00:08:31.276370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.584  [2024-12-10 00:08:31.276386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276389] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.584  [2024-12-10 00:08:31.276402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.584  [2024-12-10 00:08:31.276418] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.584  [2024-12-10 00:08:31.276434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:27:15.584  [2024-12-10 00:08:31.276449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.584  [2024-12-10 00:08:31.276460] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.584  [2024-12-10 00:08:31.276471] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291100, cid 0, qid 0
00:27:15.584  [2024-12-10 00:08:31.276476] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291280, cid 1, qid 0
00:27:15.584  [2024-12-10 00:08:31.276480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291400, cid 2, qid 0
00:27:15.584  [2024-12-10 00:08:31.276484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.584  [2024-12-10 00:08:31.276488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.584  [2024-12-10 00:08:31.276585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.584  [2024-12-10 00:08:31.276591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.584  [2024-12-10 00:08:31.276594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.584  [2024-12-10 00:08:31.276597] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.585  [2024-12-10 00:08:31.276602] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:27:15.585  [2024-12-10 00:08:31.276606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276629] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.585  [2024-12-10 00:08:31.276637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:27:15.585  [2024-12-10 00:08:31.276647] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.585  [2024-12-10 00:08:31.276709] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.585  [2024-12-10 00:08:31.276715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.585  [2024-12-10 00:08:31.276718] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.585  [2024-12-10 00:08:31.276769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276778] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276788] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.585  [2024-12-10 00:08:31.276793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.585  [2024-12-10 00:08:31.276804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.585  [2024-12-10 00:08:31.276879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.585  [2024-12-10 00:08:31.276885] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.585  [2024-12-10 00:08:31.276888] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276893] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=4096, cccid=4
00:27:15.585  [2024-12-10 00:08:31.276897] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291700) on tqpair(0x122f690): expected_datao=0, payload_size=4096
00:27:15.585  [2024-12-10 00:08:31.276901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276907] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276910] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.585  [2024-12-10 00:08:31.276936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.585  [2024-12-10 00:08:31.276939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276942] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.585  [2024-12-10 00:08:31.276952] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:27:15.585  [2024-12-10 00:08:31.276962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.276977] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.276981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.585  [2024-12-10 00:08:31.276986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.585  [2024-12-10 00:08:31.276996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.585  [2024-12-10 00:08:31.277086] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.585  [2024-12-10 00:08:31.277092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.585  [2024-12-10 00:08:31.277095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277098] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=4096, cccid=4
00:27:15.585  [2024-12-10 00:08:31.277102] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291700) on tqpair(0x122f690): expected_datao=0, payload_size=4096
00:27:15.585  [2024-12-10 00:08:31.277105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277111] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277114] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.585  [2024-12-10 00:08:31.277128] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.585  [2024-12-10 00:08:31.277131] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.585  [2024-12-10 00:08:31.277142] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.585  [2024-12-10 00:08:31.277170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.585  [2024-12-10 00:08:31.277180] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.585  [2024-12-10 00:08:31.277253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.585  [2024-12-10 00:08:31.277259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.585  [2024-12-10 00:08:31.277262] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277266] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=4096, cccid=4
00:27:15.585  [2024-12-10 00:08:31.277269] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291700) on tqpair(0x122f690): expected_datao=0, payload_size=4096
00:27:15.585  [2024-12-10 00:08:31.277273] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277279] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277282] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.585  [2024-12-10 00:08:31.277296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.585  [2024-12-10 00:08:31.277299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.585  [2024-12-10 00:08:31.277311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277318] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277339] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277343] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:27:15.585  [2024-12-10 00:08:31.277347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:27:15.585  [2024-12-10 00:08:31.277352] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:27:15.585  [2024-12-10 00:08:31.277364] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.585  [2024-12-10 00:08:31.277367] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.585  [2024-12-10 00:08:31.277372] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277382] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277385] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:15.586  [2024-12-10 00:08:31.277402] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.586  [2024-12-10 00:08:31.277407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291880, cid 5, qid 0
00:27:15.586  [2024-12-10 00:08:31.277482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.586  [2024-12-10 00:08:31.277488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.586  [2024-12-10 00:08:31.277492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.586  [2024-12-10 00:08:31.277501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.586  [2024-12-10 00:08:31.277506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.586  [2024-12-10 00:08:31.277509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291880) on tqpair=0x122f690
00:27:15.586  [2024-12-10 00:08:31.277520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277538] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291880, cid 5, qid 0
00:27:15.586  [2024-12-10 00:08:31.277609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.586  [2024-12-10 00:08:31.277615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.586  [2024-12-10 00:08:31.277618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291880) on tqpair=0x122f690
00:27:15.586  [2024-12-10 00:08:31.277628] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277637] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277646] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291880, cid 5, qid 0
00:27:15.586  [2024-12-10 00:08:31.277719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.586  [2024-12-10 00:08:31.277724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.586  [2024-12-10 00:08:31.277727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291880) on tqpair=0x122f690
00:27:15.586  [2024-12-10 00:08:31.277740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277743] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277757] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291880, cid 5, qid 0
00:27:15.586  [2024-12-10 00:08:31.277821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.586  [2024-12-10 00:08:31.277827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.586  [2024-12-10 00:08:31.277830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291880) on tqpair=0x122f690
00:27:15.586  [2024-12-10 00:08:31.277845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277849] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277877] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277880] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277891] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.277894] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x122f690)
00:27:15.586  [2024-12-10 00:08:31.277900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.586  [2024-12-10 00:08:31.277911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291880, cid 5, qid 0
00:27:15.586  [2024-12-10 00:08:31.277915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291700, cid 4, qid 0
00:27:15.586  [2024-12-10 00:08:31.277919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291a00, cid 6, qid 0
00:27:15.586  [2024-12-10 00:08:31.277923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291b80, cid 7, qid 0
00:27:15.586  [2024-12-10 00:08:31.278073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.586  [2024-12-10 00:08:31.278079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.586  [2024-12-10 00:08:31.278082] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=8192, cccid=5
00:27:15.586  [2024-12-10 00:08:31.278089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291880) on tqpair(0x122f690): expected_datao=0, payload_size=8192
00:27:15.586  [2024-12-10 00:08:31.278092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278104] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278108] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.586  [2024-12-10 00:08:31.278120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.586  [2024-12-10 00:08:31.278123] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278126] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=512, cccid=4
00:27:15.586  [2024-12-10 00:08:31.278130] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291700) on tqpair(0x122f690): expected_datao=0, payload_size=512
00:27:15.586  [2024-12-10 00:08:31.278134] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278139] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278142] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.586  [2024-12-10 00:08:31.278152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.586  [2024-12-10 00:08:31.278155] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278158] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=512, cccid=6
00:27:15.586  [2024-12-10 00:08:31.278161] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291a00) on tqpair(0x122f690): expected_datao=0, payload_size=512
00:27:15.586  [2024-12-10 00:08:31.278171] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278177] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278180] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278188] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:27:15.586  [2024-12-10 00:08:31.278193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:27:15.586  [2024-12-10 00:08:31.278196] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278199] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x122f690): datao=0, datal=4096, cccid=7
00:27:15.586  [2024-12-10 00:08:31.278203] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1291b80) on tqpair(0x122f690): expected_datao=0, payload_size=4096
00:27:15.586  [2024-12-10 00:08:31.278207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:27:15.586  [2024-12-10 00:08:31.278222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.587  [2024-12-10 00:08:31.278227] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.587  [2024-12-10 00:08:31.278231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.587  [2024-12-10 00:08:31.278234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291880) on tqpair=0x122f690
00:27:15.587  [2024-12-10 00:08:31.278244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.587  [2024-12-10 00:08:31.278249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.587  [2024-12-10 00:08:31.278252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.587  [2024-12-10 00:08:31.278255] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291700) on tqpair=0x122f690
00:27:15.587  [2024-12-10 00:08:31.278263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.587  [2024-12-10 00:08:31.278268] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.587  [2024-12-10 00:08:31.278271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.587  [2024-12-10 00:08:31.278274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291a00) on tqpair=0x122f690
00:27:15.587  [2024-12-10 00:08:31.278280] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.587  [2024-12-10 00:08:31.278285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.587  [2024-12-10 00:08:31.278288] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.587  [2024-12-10 00:08:31.278291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291b80) on tqpair=0x122f690
00:27:15.587  =====================================================
00:27:15.587  NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:15.587  =====================================================
00:27:15.587  Controller Capabilities/Features
00:27:15.587  ================================
00:27:15.587  Vendor ID:                             8086
00:27:15.587  Subsystem Vendor ID:                   8086
00:27:15.587  Serial Number:                         SPDK00000000000001
00:27:15.587  Model Number:                          SPDK bdev Controller
00:27:15.587  Firmware Version:                      25.01
00:27:15.587  Recommended Arb Burst:                 6
00:27:15.587  IEEE OUI Identifier:                   e4 d2 5c
00:27:15.587  Multi-path I/O
00:27:15.587    May have multiple subsystem ports:   Yes
00:27:15.587    May have multiple controllers:       Yes
00:27:15.587    Associated with SR-IOV VF:           No
00:27:15.587  Max Data Transfer Size:                131072
00:27:15.587  Max Number of Namespaces:              32
00:27:15.587  Max Number of I/O Queues:              127
00:27:15.587  NVMe Specification Version (VS):       1.3
00:27:15.587  NVMe Specification Version (Identify): 1.3
00:27:15.587  Maximum Queue Entries:                 128
00:27:15.587  Contiguous Queues Required:            Yes
00:27:15.587  Arbitration Mechanisms Supported
00:27:15.587    Weighted Round Robin:                Not Supported
00:27:15.587    Vendor Specific:                     Not Supported
00:27:15.587  Reset Timeout:                         15000 ms
00:27:15.587  Doorbell Stride:                       4 bytes
00:27:15.587  NVM Subsystem Reset:                   Not Supported
00:27:15.587  Command Sets Supported
00:27:15.587    NVM Command Set:                     Supported
00:27:15.587  Boot Partition:                        Not Supported
00:27:15.587  Memory Page Size Minimum:              4096 bytes
00:27:15.587  Memory Page Size Maximum:              4096 bytes
00:27:15.587  Persistent Memory Region:              Not Supported
00:27:15.587  Optional Asynchronous Events Supported
00:27:15.587    Namespace Attribute Notices:         Supported
00:27:15.587    Firmware Activation Notices:         Not Supported
00:27:15.587    ANA Change Notices:                  Not Supported
00:27:15.587    PLE Aggregate Log Change Notices:    Not Supported
00:27:15.587    LBA Status Info Alert Notices:       Not Supported
00:27:15.587    EGE Aggregate Log Change Notices:    Not Supported
00:27:15.587    Normal NVM Subsystem Shutdown event: Not Supported
00:27:15.587    Zone Descriptor Change Notices:      Not Supported
00:27:15.587    Discovery Log Change Notices:        Not Supported
00:27:15.587  Controller Attributes
00:27:15.587    128-bit Host Identifier:             Supported
00:27:15.587    Non-Operational Permissive Mode:     Not Supported
00:27:15.587    NVM Sets:                            Not Supported
00:27:15.587    Read Recovery Levels:                Not Supported
00:27:15.587    Endurance Groups:                    Not Supported
00:27:15.587    Predictable Latency Mode:            Not Supported
00:27:15.587    Traffic Based Keep Alive:            Not Supported
00:27:15.587    Namespace Granularity:               Not Supported
00:27:15.587    SQ Associations:                     Not Supported
00:27:15.587    UUID List:                           Not Supported
00:27:15.587    Multi-Domain Subsystem:              Not Supported
00:27:15.587    Fixed Capacity Management:           Not Supported
00:27:15.587    Variable Capacity Management:        Not Supported
00:27:15.587    Delete Endurance Group:              Not Supported
00:27:15.587    Delete NVM Set:                      Not Supported
00:27:15.587    Extended LBA Formats Supported:      Not Supported
00:27:15.587    Flexible Data Placement Supported:   Not Supported
00:27:15.587  
00:27:15.587  Controller Memory Buffer Support
00:27:15.587  ================================
00:27:15.587  Supported:                             No
00:27:15.587  
00:27:15.587  Persistent Memory Region Support
00:27:15.587  ================================
00:27:15.587  Supported:                             No
00:27:15.587  
00:27:15.587  Admin Command Set Attributes
00:27:15.587  ============================
00:27:15.587  Security Send/Receive:                 Not Supported
00:27:15.587  Format NVM:                            Not Supported
00:27:15.587  Firmware Activate/Download:            Not Supported
00:27:15.587  Namespace Management:                  Not Supported
00:27:15.587  Device Self-Test:                      Not Supported
00:27:15.587  Directives:                            Not Supported
00:27:15.587  NVMe-MI:                               Not Supported
00:27:15.587  Virtualization Management:             Not Supported
00:27:15.587  Doorbell Buffer Config:                Not Supported
00:27:15.587  Get LBA Status Capability:             Not Supported
00:27:15.587  Command & Feature Lockdown Capability: Not Supported
00:27:15.587  Abort Command Limit:                   4
00:27:15.587  Async Event Request Limit:             4
00:27:15.587  Number of Firmware Slots:              N/A
00:27:15.587  Firmware Slot 1 Read-Only:             N/A
00:27:15.587  Firmware Activation Without Reset:     N/A
00:27:15.587  Multiple Update Detection Support:     N/A
00:27:15.587  Firmware Update Granularity:           No Information Provided
00:27:15.587  Per-Namespace SMART Log:               No
00:27:15.587  Asymmetric Namespace Access Log Page:  Not Supported
00:27:15.587  Subsystem NQN:                         nqn.2016-06.io.spdk:cnode1
00:27:15.587  Command Effects Log Page:              Supported
00:27:15.587  Get Log Page Extended Data:            Supported
00:27:15.587  Telemetry Log Pages:                   Not Supported
00:27:15.587  Persistent Event Log Pages:            Not Supported
00:27:15.587  Supported Log Pages Log Page:          May Support
00:27:15.587  Commands Supported & Effects Log Page: Not Supported
00:27:15.587  Feature Identifiers & Effects Log Page: May Support
00:27:15.587  NVMe-MI Commands & Effects Log Page:   May Support
00:27:15.587  Data Area 4 for Telemetry Log:         Not Supported
00:27:15.587  Error Log Page Entries Supported:      128
00:27:15.587  Keep Alive:                            Supported
00:27:15.587  Keep Alive Granularity:                10000 ms
00:27:15.587  
00:27:15.587  NVM Command Set Attributes
00:27:15.587  ==========================
00:27:15.587  Submission Queue Entry Size
00:27:15.587    Max:                       64
00:27:15.587    Min:                       64
00:27:15.587  Completion Queue Entry Size
00:27:15.587    Max:                       16
00:27:15.587    Min:                       16
00:27:15.587  Number of Namespaces:        32
00:27:15.587  Compare Command:             Supported
00:27:15.587  Write Uncorrectable Command: Not Supported
00:27:15.587  Dataset Management Command:  Supported
00:27:15.587  Write Zeroes Command:        Supported
00:27:15.587  Set Features Save Field:     Not Supported
00:27:15.587  Reservations:                Supported
00:27:15.587  Timestamp:                   Not Supported
00:27:15.587  Copy:                        Supported
00:27:15.587  Volatile Write Cache:        Present
00:27:15.587  Atomic Write Unit (Normal):  1
00:27:15.587  Atomic Write Unit (PFail):   1
00:27:15.587  Atomic Compare & Write Unit: 1
00:27:15.587  Fused Compare & Write:       Supported
00:27:15.587  Scatter-Gather List
00:27:15.587    SGL Command Set:           Supported
00:27:15.587    SGL Keyed:                 Supported
00:27:15.587    SGL Bit Bucket Descriptor: Not Supported
00:27:15.587    SGL Metadata Pointer:      Not Supported
00:27:15.587    Oversized SGL:             Not Supported
00:27:15.587    SGL Metadata Address:      Not Supported
00:27:15.587    SGL Offset:                Supported
00:27:15.587    Transport SGL Data Block:  Not Supported
00:27:15.587  Replay Protected Memory Block:  Not Supported
00:27:15.587  
00:27:15.587  Firmware Slot Information
00:27:15.587  =========================
00:27:15.587  Active slot:                 1
00:27:15.587  Slot 1 Firmware Revision:    25.01
00:27:15.587  
00:27:15.587  
00:27:15.587  Commands Supported and Effects
00:27:15.588  ==============================
00:27:15.588  Admin Commands
00:27:15.588  --------------
00:27:15.588                    Get Log Page (02h): Supported 
00:27:15.588                        Identify (06h): Supported 
00:27:15.588                           Abort (08h): Supported 
00:27:15.588                    Set Features (09h): Supported 
00:27:15.588                    Get Features (0Ah): Supported 
00:27:15.588      Asynchronous Event Request (0Ch): Supported 
00:27:15.588                      Keep Alive (18h): Supported 
00:27:15.588  I/O Commands
00:27:15.588  ------------
00:27:15.588                           Flush (00h): Supported LBA-Change 
00:27:15.588                           Write (01h): Supported LBA-Change 
00:27:15.588                            Read (02h): Supported 
00:27:15.588                         Compare (05h): Supported 
00:27:15.588                    Write Zeroes (08h): Supported LBA-Change 
00:27:15.588              Dataset Management (09h): Supported LBA-Change 
00:27:15.588                            Copy (19h): Supported LBA-Change 
00:27:15.588  
00:27:15.588  Error Log
00:27:15.588  =========
00:27:15.588  
00:27:15.588  Arbitration
00:27:15.588  ===========
00:27:15.588  Arbitration Burst:           1
00:27:15.588  
00:27:15.588  Power Management
00:27:15.588  ================
00:27:15.588  Number of Power States:          1
00:27:15.588  Current Power State:             Power State #0
00:27:15.588  Power State #0:
00:27:15.588    Max Power:                      0.00 W
00:27:15.588    Non-Operational State:         Operational
00:27:15.588    Entry Latency:                 Not Reported
00:27:15.588    Exit Latency:                  Not Reported
00:27:15.588    Relative Read Throughput:      0
00:27:15.588    Relative Read Latency:         0
00:27:15.588    Relative Write Throughput:     0
00:27:15.588    Relative Write Latency:        0
00:27:15.588    Idle Power:                     Not Reported
00:27:15.588    Active Power:                   Not Reported
00:27:15.588  Non-Operational Permissive Mode: Not Supported
00:27:15.588  
00:27:15.588  Health Information
00:27:15.588  ==================
00:27:15.588  Critical Warnings:
00:27:15.588    Available Spare Space:     OK
00:27:15.588    Temperature:               OK
00:27:15.588    Device Reliability:        OK
00:27:15.588    Read Only:                 No
00:27:15.588    Volatile Memory Backup:    OK
00:27:15.588  Current Temperature:         0 Kelvin (-273 Celsius)
00:27:15.588  Temperature Threshold:       0 Kelvin (-273 Celsius)
00:27:15.588  Available Spare:             0%
00:27:15.588  Available Spare Threshold:   0%
00:27:15.588  Life Percentage Used:        0%
00:27:15.588  [2024-12-10 00:08:31.278373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278378] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x122f690)
00:27:15.588  [2024-12-10 00:08:31.278383] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.588  [2024-12-10 00:08:31.278395] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291b80, cid 7, qid 0
00:27:15.588  [2024-12-10 00:08:31.278470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.588  [2024-12-10 00:08:31.278476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.588  [2024-12-10 00:08:31.278479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291b80) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278509] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:27:15.588  [2024-12-10 00:08:31.278519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291100) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.588  [2024-12-10 00:08:31.278528] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291280) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.588  [2024-12-10 00:08:31.278538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291400) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.588  [2024-12-10 00:08:31.278547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:15.588  [2024-12-10 00:08:31.278557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.588  [2024-12-10 00:08:31.278569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.588  [2024-12-10 00:08:31.278581] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.588  [2024-12-10 00:08:31.278640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.588  [2024-12-10 00:08:31.278646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.588  [2024-12-10 00:08:31.278649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278652] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278657] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278661] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.588  [2024-12-10 00:08:31.278670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.588  [2024-12-10 00:08:31.278682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.588  [2024-12-10 00:08:31.278751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.588  [2024-12-10 00:08:31.278757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.588  [2024-12-10 00:08:31.278760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278763] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278767] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:27:15.588  [2024-12-10 00:08:31.278771] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:27:15.588  [2024-12-10 00:08:31.278779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278782] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.588  [2024-12-10 00:08:31.278791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.588  [2024-12-10 00:08:31.278800] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.588  [2024-12-10 00:08:31.278863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.588  [2024-12-10 00:08:31.278869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.588  [2024-12-10 00:08:31.278872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.588  [2024-12-10 00:08:31.278899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.588  [2024-12-10 00:08:31.278908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.588  [2024-12-10 00:08:31.278974] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.588  [2024-12-10 00:08:31.278980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.588  [2024-12-10 00:08:31.278983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.588  [2024-12-10 00:08:31.278986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.588  [2024-12-10 00:08:31.278994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.278997] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.279000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.589  [2024-12-10 00:08:31.279005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.589  [2024-12-10 00:08:31.279015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.589  [2024-12-10 00:08:31.279083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.589  [2024-12-10 00:08:31.279088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.589  [2024-12-10 00:08:31.279091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.279095] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.589  [2024-12-10 00:08:31.279103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.279106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.279109] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.589  [2024-12-10 00:08:31.279115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.589  [2024-12-10 00:08:31.279124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.589  [2024-12-10 00:08:31.283173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.589  [2024-12-10 00:08:31.283181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.589  [2024-12-10 00:08:31.283184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.283188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.589  [2024-12-10 00:08:31.283197] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.283201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.283204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x122f690)
00:27:15.589  [2024-12-10 00:08:31.283210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:15.589  [2024-12-10 00:08:31.283221] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1291580, cid 3, qid 0
00:27:15.589  [2024-12-10 00:08:31.283313] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:27:15.589  [2024-12-10 00:08:31.283318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:27:15.589  [2024-12-10 00:08:31.283321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:27:15.589  [2024-12-10 00:08:31.283325] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1291580) on tqpair=0x122f690
00:27:15.589  [2024-12-10 00:08:31.283331] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds
00:27:15.589  Data Units Read:             0
00:27:15.589  Data Units Written:          0
00:27:15.589  Host Read Commands:          0
00:27:15.589  Host Write Commands:         0
00:27:15.589  Controller Busy Time:        0 minutes
00:27:15.589  Power Cycles:                0
00:27:15.589  Power On Hours:              0 hours
00:27:15.589  Unsafe Shutdowns:            0
00:27:15.589  Unrecoverable Media Errors:  0
00:27:15.589  Lifetime Error Log Entries:  0
00:27:15.589  Warning Temperature Time:    0 minutes
00:27:15.589  Critical Temperature Time:   0 minutes
00:27:15.589  
00:27:15.589  Number of Queues
00:27:15.589  ================
00:27:15.589  Number of I/O Submission Queues:      127
00:27:15.589  Number of I/O Completion Queues:      127
00:27:15.589  
00:27:15.589  Active Namespaces
00:27:15.589  =================
00:27:15.589  Namespace ID:1
00:27:15.589  Error Recovery Timeout:                Unlimited
00:27:15.589  Command Set Identifier:                NVM (00h)
00:27:15.589  Deallocate:                            Supported
00:27:15.589  Deallocated/Unwritten Error:           Not Supported
00:27:15.589  Deallocated Read Value:                Unknown
00:27:15.589  Deallocate in Write Zeroes:            Not Supported
00:27:15.589  Deallocated Guard Field:               0xFFFF
00:27:15.589  Flush:                                 Supported
00:27:15.589  Reservation:                           Supported
00:27:15.589  Namespace Sharing Capabilities:        Multiple Controllers
00:27:15.589  Size (in LBAs):                        131072 (0GiB)
00:27:15.589  Capacity (in LBAs):                    131072 (0GiB)
00:27:15.589  Utilization (in LBAs):                 131072 (0GiB)
00:27:15.589  NGUID:                                 ABCDEF0123456789ABCDEF0123456789
00:27:15.589  EUI64:                                 ABCDEF0123456789
00:27:15.589  UUID:                                  63ff956c-44e1-4034-91bf-61ffd742ecd1
00:27:15.589  Thin Provisioning:                     Not Supported
00:27:15.589  Per-NS Atomic Units:                   Yes
00:27:15.589    Atomic Boundary Size (Normal):       0
00:27:15.589    Atomic Boundary Size (PFail):        0
00:27:15.589    Atomic Boundary Offset:              0
00:27:15.589  Maximum Single Source Range Length:    65535
00:27:15.589  Maximum Copy Length:                   65535
00:27:15.589  Maximum Source Range Count:            1
00:27:15.589  NGUID/EUI64 Never Reused:              No
00:27:15.589  Namespace Write Protected:             No
00:27:15.589  Number of LBA Formats:                 1
00:27:15.589  Current LBA Format:                    LBA Format #00
00:27:15.589  LBA Format #00: Data Size:   512  Metadata Size:     0
00:27:15.589  
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:15.589  rmmod nvme_tcp
00:27:15.589  rmmod nvme_fabrics
00:27:15.589  rmmod nvme_keyring
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3171815 ']'
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3171815
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3171815 ']'
00:27:15.589   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3171815
00:27:15.589    00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname
00:27:15.590   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:15.590    00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3171815
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3171815'
00:27:15.849  killing process with pid 3171815
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3171815
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3171815
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:15.849   00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:15.849    00:08:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:18.380  
00:27:18.380  real	0m9.277s
00:27:18.380  user	0m5.568s
00:27:18.380  sys	0m4.800s
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:27:18.380  ************************************
00:27:18.380  END TEST nvmf_identify
00:27:18.380  ************************************
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:18.380  ************************************
00:27:18.380  START TEST nvmf_perf
00:27:18.380  ************************************
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:27:18.380  * Looking for test storage...
00:27:18.380  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-:
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-:
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:18.380  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:18.380  		--rc genhtml_branch_coverage=1
00:27:18.380  		--rc genhtml_function_coverage=1
00:27:18.380  		--rc genhtml_legend=1
00:27:18.380  		--rc geninfo_all_blocks=1
00:27:18.380  		--rc geninfo_unexecuted_blocks=1
00:27:18.380  		
00:27:18.380  		'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:18.380  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:18.380  		--rc genhtml_branch_coverage=1
00:27:18.380  		--rc genhtml_function_coverage=1
00:27:18.380  		--rc genhtml_legend=1
00:27:18.380  		--rc geninfo_all_blocks=1
00:27:18.380  		--rc geninfo_unexecuted_blocks=1
00:27:18.380  		
00:27:18.380  		'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:18.380  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:18.380  		--rc genhtml_branch_coverage=1
00:27:18.380  		--rc genhtml_function_coverage=1
00:27:18.380  		--rc genhtml_legend=1
00:27:18.380  		--rc geninfo_all_blocks=1
00:27:18.380  		--rc geninfo_unexecuted_blocks=1
00:27:18.380  		
00:27:18.380  		'
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:18.380  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:18.380  		--rc genhtml_branch_coverage=1
00:27:18.380  		--rc genhtml_function_coverage=1
00:27:18.380  		--rc genhtml_legend=1
00:27:18.380  		--rc geninfo_all_blocks=1
00:27:18.380  		--rc geninfo_unexecuted_blocks=1
00:27:18.380  		
00:27:18.380  		'
00:27:18.380   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:18.380     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:18.380    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:18.381     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:18.381     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob
00:27:18.381     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:18.381     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:18.381     00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:18.381      00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.381      00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.381      00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.381      00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH
00:27:18.381      00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:18.381  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:18.381    00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable
00:27:18.381   00:08:33 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=()
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx
00:27:24.948   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:27:24.949  Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:27:24.949  Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:27:24.949  Found net devices under 0000:af:00.0: cvl_0_0
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:27:24.949  Found net devices under 0000:af:00.1: cvl_0_1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:24.949  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:24.949  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms
00:27:24.949  
00:27:24.949  --- 10.0.0.2 ping statistics ---
00:27:24.949  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:24.949  rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:24.949  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:24.949  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:27:24.949  
00:27:24.949  --- 10.0.0.1 ping statistics ---
00:27:24.949  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:24.949  rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:24.949   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3175413
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3175413
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3175413 ']'
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:24.950  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:24.950   00:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:24.950  [2024-12-10 00:08:39.907971] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:24.950  [2024-12-10 00:08:39.908013] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:24.950  [2024-12-10 00:08:39.986528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:24.950  [2024-12-10 00:08:40.032324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:24.950  [2024-12-10 00:08:40.032363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:24.950  [2024-12-10 00:08:40.032370] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:24.950  [2024-12-10 00:08:40.032376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:24.950  [2024-12-10 00:08:40.032382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:24.950  [2024-12-10 00:08:40.033711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:24.950  [2024-12-10 00:08:40.033819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:24.950  [2024-12-10 00:08:40.033925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:24.950  [2024-12-10 00:08:40.033926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:27:24.950   00:08:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:27:28.241    00:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:27:28.241    00:08:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:27:28.241   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0
00:27:28.241    00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:27:28.500   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:27:28.500   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']'
00:27:28.500   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:27:28.500   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:27:28.500   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:27:28.758  [2024-12-10 00:08:44.398413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:28.758   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:29.016   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:29.016   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:29.016   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:27:29.017   00:08:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:27:29.275   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:29.533  [2024-12-10 00:08:45.209492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:29.533   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:29.791   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']'
00:27:29.791   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:27:29.791   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:27:29.791   00:08:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0'
00:27:31.166  Initializing NVMe Controllers
00:27:31.166  Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54]
00:27:31.166  Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0
00:27:31.166  Initialization complete. Launching workers.
00:27:31.166  ========================================================
00:27:31.166                                                                             Latency(us)
00:27:31.166  Device Information                     :       IOPS      MiB/s    Average        min        max
00:27:31.166  PCIE (0000:5e:00.0) NSID 1 from core  0:   97723.71     381.73     326.95      29.41    4704.00
00:27:31.166  ========================================================
00:27:31.166  Total                                  :   97723.71     381.73     326.95      29.41    4704.00
00:27:31.166  
00:27:31.166   00:08:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:32.105  Initializing NVMe Controllers
00:27:32.105  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:32.105  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:32.105  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:32.105  Initialization complete. Launching workers.
00:27:32.105  ========================================================
00:27:32.105                                                                                                               Latency(us)
00:27:32.105  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:32.105  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:      40.00       0.16   25174.60     110.15   44806.49
00:27:32.105  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:      36.00       0.14   27909.77    7200.29   47889.70
00:27:32.105  ========================================================
00:27:32.105  Total                                                                    :      76.00       0.30   26470.21     110.15   47889.70
00:27:32.105  
00:27:32.105   00:08:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:33.480  Initializing NVMe Controllers
00:27:33.480  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:33.480  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:33.480  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:33.480  Initialization complete. Launching workers.
00:27:33.480  ========================================================
00:27:33.480                                                                                                               Latency(us)
00:27:33.480  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:33.480  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:   11304.00      44.16    2840.68     427.91    6322.03
00:27:33.480  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    3834.00      14.98    8381.42    5297.08   17899.99
00:27:33.480  ========================================================
00:27:33.480  Total                                                                    :   15138.00      59.13    4243.98     427.91   17899.99
00:27:33.480  
00:27:33.480   00:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:27:33.480   00:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:27:33.480   00:08:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:27:36.030  Initializing NVMe Controllers
00:27:36.030  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:36.030  Controller IO queue size 128, less than required.
00:27:36.030  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:36.030  Controller IO queue size 128, less than required.
00:27:36.030  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:36.030  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:36.030  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:36.030  Initialization complete. Launching workers.
00:27:36.030  ========================================================
00:27:36.030                                                                                                               Latency(us)
00:27:36.030  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:36.030  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1834.98     458.75   71115.49   50993.66  118264.94
00:27:36.030  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:     592.88     148.22  220010.65   71656.05  326140.85
00:27:36.030  ========================================================
00:27:36.030  Total                                                                    :    2427.86     606.97  107475.39   50993.66  326140.85
00:27:36.030  
00:27:36.030   00:08:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:27:36.289  No valid NVMe controllers or AIO or URING devices found
00:27:36.289  Initializing NVMe Controllers
00:27:36.289  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:36.289  Controller IO queue size 128, less than required.
00:27:36.289  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:36.289  WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:27:36.289  Controller IO queue size 128, less than required.
00:27:36.289  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:36.289  WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:27:36.289  WARNING: Some requested NVMe devices were skipped
00:27:36.289   00:08:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:27:38.831  Initializing NVMe Controllers
00:27:38.831  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:38.831  Controller IO queue size 128, less than required.
00:27:38.831  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.831  Controller IO queue size 128, less than required.
00:27:38.831  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:27:38.831  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:27:38.831  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:27:38.831  Initialization complete. Launching workers.
00:27:38.831  
00:27:38.831  ====================
00:27:38.831  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:27:38.831  TCP transport:
00:27:38.831  	polls:              11878
00:27:38.831  	idle_polls:         8195
00:27:38.831  	sock_completions:   3683
00:27:38.831  	nvme_completions:   6413
00:27:38.831  	submitted_requests: 9622
00:27:38.831  	queued_requests:    1
00:27:38.831  
00:27:38.831  ====================
00:27:38.831  lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:27:38.831  TCP transport:
00:27:38.831  	polls:              12035
00:27:38.831  	idle_polls:         8558
00:27:38.831  	sock_completions:   3477
00:27:38.831  	nvme_completions:   6479
00:27:38.831  	submitted_requests: 9740
00:27:38.831  	queued_requests:    1
00:27:38.831  ========================================================
00:27:38.831                                                                                                               Latency(us)
00:27:38.831  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:27:38.831  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    1602.91     400.73   81734.10   55400.90  141578.64
00:27:38.831  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:    1619.41     404.85   79926.58   42185.79  138524.28
00:27:38.831  ========================================================
00:27:38.831  Total                                                                    :    3222.31     805.58   80825.71   42185.79  141578.64
00:27:38.831  
00:27:38.831   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:27:38.831   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:39.094  rmmod nvme_tcp
00:27:39.094  rmmod nvme_fabrics
00:27:39.094  rmmod nvme_keyring
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3175413 ']'
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3175413
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3175413 ']'
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3175413
00:27:39.094    00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname
00:27:39.094   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:39.094    00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175413
00:27:39.353   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:39.353   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:39.353   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175413'
00:27:39.353  killing process with pid 3175413
00:27:39.353   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 3175413
00:27:39.353   00:08:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3175413
00:27:40.729   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:40.730   00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:40.730    00:08:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:42.632   00:08:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:42.891  
00:27:42.891  real	0m24.756s
00:27:42.891  user	1m5.247s
00:27:42.891  sys	0m8.236s
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:27:42.891  ************************************
00:27:42.891  END TEST nvmf_perf
00:27:42.891  ************************************
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:42.891  ************************************
00:27:42.891  START TEST nvmf_fio_host
00:27:42.891  ************************************
00:27:42.891   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:27:42.891  * Looking for test storage...
00:27:42.891  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:42.891     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version
00:27:42.891     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-:
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-:
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<'
00:27:42.891    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:42.892    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:42.892     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1
00:27:42.892     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1
00:27:42.892     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:42.892     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:43.151  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.151  		--rc genhtml_branch_coverage=1
00:27:43.151  		--rc genhtml_function_coverage=1
00:27:43.151  		--rc genhtml_legend=1
00:27:43.151  		--rc geninfo_all_blocks=1
00:27:43.151  		--rc geninfo_unexecuted_blocks=1
00:27:43.151  		
00:27:43.151  		'
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:43.151  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.151  		--rc genhtml_branch_coverage=1
00:27:43.151  		--rc genhtml_function_coverage=1
00:27:43.151  		--rc genhtml_legend=1
00:27:43.151  		--rc geninfo_all_blocks=1
00:27:43.151  		--rc geninfo_unexecuted_blocks=1
00:27:43.151  		
00:27:43.151  		'
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:43.151  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.151  		--rc genhtml_branch_coverage=1
00:27:43.151  		--rc genhtml_function_coverage=1
00:27:43.151  		--rc genhtml_legend=1
00:27:43.151  		--rc geninfo_all_blocks=1
00:27:43.151  		--rc geninfo_unexecuted_blocks=1
00:27:43.151  		
00:27:43.151  		'
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:43.151  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:43.151  		--rc genhtml_branch_coverage=1
00:27:43.151  		--rc genhtml_function_coverage=1
00:27:43.151  		--rc genhtml_legend=1
00:27:43.151  		--rc geninfo_all_blocks=1
00:27:43.151  		--rc geninfo_unexecuted_blocks=1
00:27:43.151  		
00:27:43.151  		'
00:27:43.151   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:43.151    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.151     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:43.152     00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:43.152      00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152      00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152      00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152      00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH
00:27:43.152      00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:43.152  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:43.152    00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable
00:27:43.152   00:08:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.717   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:49.717   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=()
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:27:49.718  Found 0000:af:00.0 (0x8086 - 0x159b)
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:27:49.718  Found 0000:af:00.1 (0x8086 - 0x159b)
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:27:49.718  Found net devices under 0000:af:00.0: cvl_0_0
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:27:49.718  Found net devices under 0000:af:00.1: cvl_0_1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:49.718   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:49.719  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:49.719  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms
00:27:49.719  
00:27:49.719  --- 10.0.0.2 ping statistics ---
00:27:49.719  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:49.719  rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:49.719  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:49.719  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:27:49.719  
00:27:49.719  --- 10.0.0.1 ping statistics ---
00:27:49.719  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:49.719  rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3181737
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3181737
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3181737 ']'
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:49.719  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:49.719   00:09:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.719  [2024-12-10 00:09:04.811277] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:27:49.719  [2024-12-10 00:09:04.811327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:49.719  [2024-12-10 00:09:04.892407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:49.719  [2024-12-10 00:09:04.933745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:49.719  [2024-12-10 00:09:04.933780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:49.719  [2024-12-10 00:09:04.933787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:49.719  [2024-12-10 00:09:04.933793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:49.719  [2024-12-10 00:09:04.933798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:49.719  [2024-12-10 00:09:04.935282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:27:49.719  [2024-12-10 00:09:04.935389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:49.719  [2024-12-10 00:09:04.935405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:27:49.719  [2024-12-10 00:09:04.935407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:49.719  [2024-12-10 00:09:05.209739] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:27:49.719  Malloc1
00:27:49.719   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:49.977   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:27:50.237   00:09:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:50.237  [2024-12-10 00:09:06.091791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:50.496   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:27:50.496   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:27:50.496   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:50.497   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:27:50.497    00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:50.754   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:50.754   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:50.754   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:27:50.754   00:09:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:27:51.012  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:27:51.012  fio-3.35
00:27:51.012  Starting 1 thread
00:27:53.683  
00:27:53.683  test: (groupid=0, jobs=1): err= 0: pid=3182256: Tue Dec 10 00:09:08 2024
00:27:53.683    read: IOPS=11.9k, BW=46.3MiB/s (48.6MB/s)(92.9MiB/2005msec)
00:27:53.683      slat (nsec): min=1532, max=251485, avg=1753.77, stdev=2256.82
00:27:53.683      clat (usec): min=3169, max=10352, avg=5962.31, stdev=438.07
00:27:53.683       lat (usec): min=3205, max=10353, avg=5964.07, stdev=437.95
00:27:53.683      clat percentiles (usec):
00:27:53.683       |  1.00th=[ 4948],  5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604],
00:27:53.683       | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063],
00:27:53.683       | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652],
00:27:53.683       | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 9765],
00:27:53.683       | 99.99th=[10290]
00:27:53.683     bw (  KiB/s): min=46128, max=48216, per=99.97%, avg=47432.00, stdev=920.17, samples=4
00:27:53.683     iops        : min=11532, max=12054, avg=11858.00, stdev=230.04, samples=4
00:27:53.683    write: IOPS=11.8k, BW=46.1MiB/s (48.4MB/s)(92.5MiB/2005msec); 0 zone resets
00:27:53.683      slat (nsec): min=1574, max=225551, avg=1822.04, stdev=1663.56
00:27:53.683      clat (usec): min=2422, max=9712, avg=4798.71, stdev=366.33
00:27:53.683       lat (usec): min=2438, max=9713, avg=4800.53, stdev=366.28
00:27:53.683      clat percentiles (usec):
00:27:53.683       |  1.00th=[ 3982],  5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490],
00:27:53.683       | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 4883],
00:27:53.683       | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342],
00:27:53.683       | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7242], 99.95th=[ 8586],
00:27:53.683       | 99.99th=[ 9503]
00:27:53.683     bw (  KiB/s): min=46664, max=47872, per=99.99%, avg=47228.00, stdev=504.30, samples=4
00:27:53.683     iops        : min=11666, max=11968, avg=11807.00, stdev=126.07, samples=4
00:27:53.683    lat (msec)   : 4=0.61%, 10=99.37%, 20=0.01%
00:27:53.683    cpu          : usr=72.01%, sys=26.85%, ctx=97, majf=0, minf=2
00:27:53.683    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:27:53.683       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:53.683       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:53.683       issued rwts: total=23782,23675,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:53.683       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:53.683  
00:27:53.683  Run status group 0 (all jobs):
00:27:53.683     READ: bw=46.3MiB/s (48.6MB/s), 46.3MiB/s-46.3MiB/s (48.6MB/s-48.6MB/s), io=92.9MiB (97.4MB), run=2005-2005msec
00:27:53.683    WRITE: bw=46.1MiB/s (48.4MB/s), 46.1MiB/s-46.1MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2005-2005msec
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:27:53.683   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:53.684   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:53.684   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:53.684   00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:27:53.684    00:09:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:27:53.684   00:09:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:27:53.684   00:09:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:27:53.684   00:09:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:27:53.684   00:09:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:27:53.684  test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:27:53.684  fio-3.35
00:27:53.684  Starting 1 thread
00:27:56.216  
00:27:56.216  test: (groupid=0, jobs=1): err= 0: pid=3183062: Tue Dec 10 00:09:11 2024
00:27:56.216    read: IOPS=10.8k, BW=168MiB/s (176MB/s)(337MiB/2005msec)
00:27:56.216      slat (nsec): min=2475, max=96540, avg=2846.02, stdev=1452.18
00:27:56.216      clat (usec): min=1099, max=50096, avg=6953.79, stdev=3384.93
00:27:56.216       lat (usec): min=1103, max=50098, avg=6956.64, stdev=3385.01
00:27:56.216      clat percentiles (usec):
00:27:56.216       |  1.00th=[ 3621],  5.00th=[ 4293], 10.00th=[ 4686], 20.00th=[ 5276],
00:27:56.216       | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6652], 60.00th=[ 7177],
00:27:56.216       | 70.00th=[ 7570], 80.00th=[ 8094], 90.00th=[ 8717], 95.00th=[ 9503],
00:27:56.216       | 99.00th=[11469], 99.50th=[42730], 99.90th=[48497], 99.95th=[49546],
00:27:56.216       | 99.99th=[50070]
00:27:56.216     bw (  KiB/s): min=74112, max=94144, per=49.96%, avg=86072.00, stdev=8626.57, samples=4
00:27:56.216     iops        : min= 4632, max= 5884, avg=5379.50, stdev=539.16, samples=4
00:27:56.216    write: IOPS=6344, BW=99.1MiB/s (104MB/s)(177MiB/1783msec); 0 zone resets
00:27:56.216      slat (usec): min=28, max=382, avg=31.95, stdev= 7.79
00:27:56.216      clat (usec): min=4312, max=15262, avg=8560.22, stdev=1548.99
00:27:56.216       lat (usec): min=4344, max=15373, avg=8592.17, stdev=1550.82
00:27:56.216      clat percentiles (usec):
00:27:56.216       |  1.00th=[ 5604],  5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242],
00:27:56.216       | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8717],
00:27:56.216       | 70.00th=[ 9241], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11469],
00:27:56.216       | 99.00th=[12649], 99.50th=[13173], 99.90th=[14746], 99.95th=[15139],
00:27:56.216       | 99.99th=[15270]
00:27:56.216     bw (  KiB/s): min=77824, max=98304, per=88.65%, avg=89992.00, stdev=8668.81, samples=4
00:27:56.216     iops        : min= 4864, max= 6144, avg=5624.50, stdev=541.80, samples=4
00:27:56.216    lat (msec)   : 2=0.01%, 4=1.54%, 10=89.95%, 20=8.11%, 50=0.38%
00:27:56.216    lat (msec)   : 100=0.01%
00:27:56.216    cpu          : usr=85.53%, sys=13.77%, ctx=46, majf=0, minf=2
00:27:56.216    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:27:56.216       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:27:56.216       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:27:56.216       issued rwts: total=21588,11313,0,0 short=0,0,0,0 dropped=0,0,0,0
00:27:56.216       latency   : target=0, window=0, percentile=100.00%, depth=128
00:27:56.216  
00:27:56.216  Run status group 0 (all jobs):
00:27:56.216     READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=337MiB (354MB), run=2005-2005msec
00:27:56.216    WRITE: bw=99.1MiB/s (104MB/s), 99.1MiB/s-99.1MiB/s (104MB/s-104MB/s), io=177MiB (185MB), run=1783-1783msec
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:56.216  rmmod nvme_tcp
00:27:56.216  rmmod nvme_fabrics
00:27:56.216  rmmod nvme_keyring
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3181737 ']'
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3181737
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3181737 ']'
00:27:56.216   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3181737
00:27:56.216    00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:56.217    00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181737
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181737'
00:27:56.217  killing process with pid 3181737
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3181737
00:27:56.217   00:09:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3181737
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:56.486   00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:56.486    00:09:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:58.401  
00:27:58.401  real	0m15.613s
00:27:58.401  user	0m45.910s
00:27:58.401  sys	0m6.475s
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:27:58.401  ************************************
00:27:58.401  END TEST nvmf_fio_host
00:27:58.401  ************************************
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:58.401   00:09:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:58.661  ************************************
00:27:58.661  START TEST nvmf_failover
00:27:58.661  ************************************
00:27:58.661   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:27:58.661  * Looking for test storage...
00:27:58.661  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-:
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-:
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<'
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:58.661     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:58.661    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:27:58.661  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.662  		--rc genhtml_branch_coverage=1
00:27:58.662  		--rc genhtml_function_coverage=1
00:27:58.662  		--rc genhtml_legend=1
00:27:58.662  		--rc geninfo_all_blocks=1
00:27:58.662  		--rc geninfo_unexecuted_blocks=1
00:27:58.662  		
00:27:58.662  		'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:27:58.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.662  		--rc genhtml_branch_coverage=1
00:27:58.662  		--rc genhtml_function_coverage=1
00:27:58.662  		--rc genhtml_legend=1
00:27:58.662  		--rc geninfo_all_blocks=1
00:27:58.662  		--rc geninfo_unexecuted_blocks=1
00:27:58.662  		
00:27:58.662  		'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:27:58.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.662  		--rc genhtml_branch_coverage=1
00:27:58.662  		--rc genhtml_function_coverage=1
00:27:58.662  		--rc genhtml_legend=1
00:27:58.662  		--rc geninfo_all_blocks=1
00:27:58.662  		--rc geninfo_unexecuted_blocks=1
00:27:58.662  		
00:27:58.662  		'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:27:58.662  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:58.662  		--rc genhtml_branch_coverage=1
00:27:58.662  		--rc genhtml_function_coverage=1
00:27:58.662  		--rc genhtml_legend=1
00:27:58.662  		--rc geninfo_all_blocks=1
00:27:58.662  		--rc geninfo_unexecuted_blocks=1
00:27:58.662  		
00:27:58.662  		'
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:58.662     00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:58.662      00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:58.662      00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:58.662      00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:58.662      00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH
00:27:58.662      00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:27:58.662  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:58.662    00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable
00:27:58.662   00:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=()
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:28:05.227  Found 0000:af:00.0 (0x8086 - 0x159b)
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:28:05.227  Found 0000:af:00.1 (0x8086 - 0x159b)
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:28:05.227  Found net devices under 0000:af:00.0: cvl_0_0
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:28:05.227  Found net devices under 0000:af:00.1: cvl_0_1
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:05.227   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:05.228  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:05.228  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms
00:28:05.228  
00:28:05.228  --- 10.0.0.2 ping statistics ---
00:28:05.228  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.228  rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:05.228  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:05.228  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms
00:28:05.228  
00:28:05.228  --- 10.0.0.1 ping statistics ---
00:28:05.228  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:05.228  rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3186970
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3186970
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3186970 ']'
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:05.228  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:05.228   00:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:05.228  [2024-12-10 00:09:20.417306] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:28:05.228  [2024-12-10 00:09:20.417350] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:05.228  [2024-12-10 00:09:20.497360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:05.228  [2024-12-10 00:09:20.536504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:05.228  [2024-12-10 00:09:20.536542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:05.228  [2024-12-10 00:09:20.536549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:05.228  [2024-12-10 00:09:20.536554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:05.228  [2024-12-10 00:09:20.536560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:05.228  [2024-12-10 00:09:20.537930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:05.228  [2024-12-10 00:09:20.538037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.228  [2024-12-10 00:09:20.538038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:05.487   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:28:05.745  [2024-12-10 00:09:21.477212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:05.745   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:28:06.004  Malloc0
00:28:06.004   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:06.262   00:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:06.262   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:06.520  [2024-12-10 00:09:22.278316] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:06.520   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:06.779  [2024-12-10 00:09:22.490924] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:06.779   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:07.037  [2024-12-10 00:09:22.703627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3187436
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3187436 /var/tmp/bdevperf.sock
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3187436 ']'
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:07.037   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:07.038  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:07.038   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:07.038   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:07.297   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:07.297   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:28:07.297   00:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:07.555  NVMe0n1
00:28:07.555   00:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:08.120  
00:28:08.120   00:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3187526
00:28:08.121   00:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:08.121   00:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:28:09.053   00:09:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:09.311   00:09:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:28:12.593   00:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:12.593  
00:28:12.593   00:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:12.850   00:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:28:16.132   00:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:16.132  [2024-12-10 00:09:31.749995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:16.132   00:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:28:17.064   00:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:17.321   00:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3187526
00:28:23.887  {
00:28:23.887    "results": [
00:28:23.887      {
00:28:23.887        "job": "NVMe0n1",
00:28:23.887        "core_mask": "0x1",
00:28:23.887        "workload": "verify",
00:28:23.887        "status": "finished",
00:28:23.887        "verify_range": {
00:28:23.887          "start": 0,
00:28:23.887          "length": 16384
00:28:23.887        },
00:28:23.887        "queue_depth": 128,
00:28:23.887        "io_size": 4096,
00:28:23.887        "runtime": 15.006761,
00:28:23.887        "iops": 11435.578936720589,
00:28:23.887        "mibps": 44.6702302215648,
00:28:23.887        "io_failed": 4245,
00:28:23.887        "io_timeout": 0,
00:28:23.887        "avg_latency_us": 10900.387659167023,
00:28:23.887        "min_latency_us": 440.807619047619,
00:28:23.887        "max_latency_us": 13232.030476190475
00:28:23.887      }
00:28:23.887    ],
00:28:23.887    "core_count": 1
00:28:23.887  }
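The bdevperf result block above is internally consistent and can be sanity-checked offline: `mibps` should equal `iops * io_size / 2^20`. A minimal Python sketch, using the JSON exactly as emitted by bdevperf's `perform_tests` in this run (only the fields shown in the log are assumed):

```python
import json

# bdevperf "perform_tests" output as captured in the log above
raw = """
{
  "results": [
    {
      "job": "NVMe0n1",
      "iops": 11435.578936720589,
      "mibps": 44.6702302215648,
      "io_size": 4096,
      "io_failed": 4245,
      "runtime": 15.006761
    }
  ],
  "core_count": 1
}
"""

job = json.loads(raw)["results"][0]

# Derive throughput from IOPS and block size: MiB/s = iops * io_size / 2^20
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)

# The reported mibps matches the derivation to floating-point precision
assert abs(derived_mibps - job["mibps"]) < 1e-6

# io_failed > 0 is expected here: I/O submitted while listeners were being
# removed during failover is aborted (the "ABORTED - SQ DELETION" lines below)
print(f"{derived_mibps:.2f} MiB/s, {job['io_failed']} aborted I/Os")
```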
00:28:23.888   00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3187436
00:28:23.888   00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3187436 ']'
00:28:23.888   00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3187436
00:28:23.888    00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:23.888   00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:23.888    00:09:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3187436
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3187436'
00:28:23.888  killing process with pid 3187436
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3187436
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3187436
00:28:23.888   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:23.888  [2024-12-10 00:09:22.775981] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:28:23.888  [2024-12-10 00:09:22.776031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3187436 ]
00:28:23.888  [2024-12-10 00:09:22.849221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:23.888  [2024-12-10 00:09:22.889042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:23.888  Running I/O for 15 seconds...
00:28:23.888      11426.00 IOPS,    44.63 MiB/s
00:28:23.888  [2024-12-10 00:09:24.971796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:100552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.971988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.971995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.972016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.888  [2024-12-10 00:09:24.972031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:99656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.888  [2024-12-10 00:09:24.972209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:99744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.888  [2024-12-10 00:09:24.972216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.889  [2024-12-10 00:09:24.972277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:99800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:99808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:99816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:99832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:99840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.889  [2024-12-10 00:09:24.972690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.889  [2024-12-10 00:09:24.972698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:100016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:100048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:100056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:100080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:100104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.972992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.972998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:100216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:100224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:100232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:100240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:100248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.890  [2024-12-10 00:09:24.973157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.890  [2024-12-10 00:09:24.973169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:100256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:100272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:100288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:100304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:100312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:100320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:100328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:100336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:100344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:100352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:100360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:100368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:100384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:100392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:100400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:100416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:100424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:100440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:100448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:100456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:100464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:100472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:100480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.891  [2024-12-10 00:09:24.973599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.891  [2024-12-10 00:09:24.973613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.891  [2024-12-10 00:09:24.973634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.891  [2024-12-10 00:09:24.973641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:100504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.892  [2024-12-10 00:09:24.973655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:100512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.892  [2024-12-10 00:09:24.973668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:100520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.892  [2024-12-10 00:09:24.973684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:100528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.892  [2024-12-10 00:09:24.973698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:100536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.892  [2024-12-10 00:09:24.973712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x94ce70 is same with the state(6) to be set
00:28:23.892  [2024-12-10 00:09:24.973728] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.892  [2024-12-10 00:09:24.973733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:23.892  [2024-12-10 00:09:24.973739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0
00:28:23.892  [2024-12-10 00:09:24.973746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973790] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:23.892  [2024-12-10 00:09:24.973813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:24.973820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:24.973834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:24.973847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:24.973860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:24.973866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:28:23.892  [2024-12-10 00:09:24.976662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:28:23.892  [2024-12-10 00:09:24.976687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9285d0 (9): Bad file descriptor
00:28:23.892  [2024-12-10 00:09:25.000416] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:28:23.892      11388.00 IOPS,    44.48 MiB/s
[2024-12-09T23:09:39.749Z]     11490.33 IOPS,    44.88 MiB/s
[2024-12-09T23:09:39.749Z]     11520.25 IOPS,    45.00 MiB/s
[2024-12-09T23:09:39.749Z] [2024-12-10 00:09:28.535923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:28.535964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.535978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:28.535985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.535992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:28.535999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.536006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.892  [2024-12-10 00:09:28.536012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.536018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9285d0 is same with the state(6) to be set
00:28:23.892  [2024-12-10 00:09:28.539291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.892  [2024-12-10 00:09:28.539505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.892  [2024-12-10 00:09:28.539511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:45808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:45880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.893  [2024-12-10 00:09:28.539908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.893  [2024-12-10 00:09:28.539916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.539931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.539945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.539960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:45936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.539974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.539988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:45952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.539995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:46016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:46040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:45096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:45104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:45120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:45128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:45136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:45152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:45168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:45176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:45192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:45200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.894  [2024-12-10 00:09:28.540382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.894  [2024-12-10 00:09:28.540390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:46048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.894  [2024-12-10 00:09:28.540396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:45208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:45224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:45256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:45280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:45296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:45320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:45344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:45360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:45376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:45384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:45400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:45408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:45432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:45448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.895  [2024-12-10 00:09:28.540846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.895  [2024-12-10 00:09:28.540860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.895  [2024-12-10 00:09:28.540868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:46080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:46104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:28.540954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:45456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.540968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:45464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.540984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.540992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:45472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.540998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:45480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:45496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:45504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:45528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:45536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:45552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:45560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:45568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:28.541178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x956f10 is same with the state(6) to be set
00:28:23.896  [2024-12-10 00:09:28.541193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.896  [2024-12-10 00:09:28.541199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:23.896  [2024-12-10 00:09:28.541205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:45576 len:8 PRP1 0x0 PRP2 0x0
00:28:23.896  [2024-12-10 00:09:28.541212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:28.541256] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:28:23.896  [2024-12-10 00:09:28.541264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:28:23.896  [2024-12-10 00:09:28.544038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:28:23.896  [2024-12-10 00:09:28.544065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9285d0 (9): Bad file descriptor
00:28:23.896  [2024-12-10 00:09:28.569574] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:28:23.896      11436.40 IOPS,    44.67 MiB/s
[2024-12-09T23:09:39.753Z]     11479.83 IOPS,    44.84 MiB/s
[2024-12-09T23:09:39.753Z]     11484.00 IOPS,    44.86 MiB/s
[2024-12-09T23:09:39.753Z]     11459.62 IOPS,    44.76 MiB/s
[2024-12-09T23:09:39.753Z]     11432.22 IOPS,    44.66 MiB/s
[2024-12-09T23:09:39.753Z] [2024-12-10 00:09:32.964621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.896  [2024-12-10 00:09:32.964660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:32.964683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:32.964698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:32.964718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:32.964734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:63960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.896  [2024-12-10 00:09:32.964749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.896  [2024-12-10 00:09:32.964757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:63976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:64000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:64016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:64024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:64048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.964908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.964990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.964997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.897  [2024-12-10 00:09:32.965011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:64096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:64112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:64120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:64128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.897  [2024-12-10 00:09:32.965226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.897  [2024-12-10 00:09:32.965233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.898  [2024-12-10 00:09:32.965633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.898  [2024-12-10 00:09:32.965641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.965989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.965997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.899  [2024-12-10 00:09:32.966089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.899  [2024-12-10 00:09:32.966096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:23.900  [2024-12-10 00:09:32.966185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:64736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:64744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:64824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:23.900  [2024-12-10 00:09:32.966517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa841f0 is same with the state(6) to be set
00:28:23.900  [2024-12-10 00:09:32.966532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:23.900  [2024-12-10 00:09:32.966538] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:23.900  [2024-12-10 00:09:32.966544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64872 len:8 PRP1 0x0 PRP2 0x0
00:28:23.900  [2024-12-10 00:09:32.966550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.900  [2024-12-10 00:09:32.966593] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:28:23.900  [2024-12-10 00:09:32.966619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.900  [2024-12-10 00:09:32.966626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.901  [2024-12-10 00:09:32.966634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.901  [2024-12-10 00:09:32.966640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.901  [2024-12-10 00:09:32.966647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.901  [2024-12-10 00:09:32.966654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.901  [2024-12-10 00:09:32.966661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:23.901  [2024-12-10 00:09:32.966667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:23.901  [2024-12-10 00:09:32.966673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:28:23.901  [2024-12-10 00:09:32.969462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:28:23.901  [2024-12-10 00:09:32.969489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9285d0 (9): Bad file descriptor
00:28:23.901  [2024-12-10 00:09:32.996513] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:28:23.901      11388.00 IOPS,    44.48 MiB/s
[2024-12-09T23:09:39.758Z]     11401.00 IOPS,    44.54 MiB/s
[2024-12-09T23:09:39.758Z]     11409.17 IOPS,    44.57 MiB/s
[2024-12-09T23:09:39.758Z]     11417.77 IOPS,    44.60 MiB/s
[2024-12-09T23:09:39.758Z]     11428.07 IOPS,    44.64 MiB/s
[2024-12-09T23:09:39.758Z]     11439.07 IOPS,    44.68 MiB/s
00:28:23.901                                                                                                  Latency(us)
00:28:23.901  
[2024-12-09T23:09:39.758Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:23.901  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:23.901  	 Verification LBA range: start 0x0 length 0x4000
00:28:23.901  	 NVMe0n1             :      15.01   11435.58      44.67     282.87     0.00   10900.39     440.81   13232.03
00:28:23.901  
[2024-12-09T23:09:39.758Z]  ===================================================================================================================
00:28:23.901  
[2024-12-09T23:09:39.758Z]  Total                       :              11435.58      44.67     282.87     0.00   10900.39     440.81   13232.03
00:28:23.901  Received shutdown signal, test time was about 15.000000 seconds
00:28:23.901  
00:28:23.901                                                                                                  Latency(us)
00:28:23.901  
[2024-12-09T23:09:39.758Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:23.901  
[2024-12-09T23:09:39.758Z]  ===================================================================================================================
00:28:23.901  
[2024-12-09T23:09:39.758Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:28:23.901    00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3190060
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3190060 /var/tmp/bdevperf.sock
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3190060 ']'
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:28:23.901  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:28:23.901  [2024-12-10 00:09:39.607401] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:23.901   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:28:24.159  [2024-12-10 00:09:39.795953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:28:24.159   00:09:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:24.417  NVMe0n1
00:28:24.417   00:09:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:24.676  
00:28:24.676   00:09:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:28:25.249  
00:28:25.249   00:09:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:25.250   00:09:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:28:25.511   00:09:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:25.511   00:09:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:28:28.808   00:09:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:28.808   00:09:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:28:28.808   00:09:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3190873
00:28:28.808   00:09:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:28:28.808   00:09:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3190873
00:28:30.187  {
00:28:30.187    "results": [
00:28:30.187      {
00:28:30.188        "job": "NVMe0n1",
00:28:30.188        "core_mask": "0x1",
00:28:30.188        "workload": "verify",
00:28:30.188        "status": "finished",
00:28:30.188        "verify_range": {
00:28:30.188          "start": 0,
00:28:30.188          "length": 16384
00:28:30.188        },
00:28:30.188        "queue_depth": 128,
00:28:30.188        "io_size": 4096,
00:28:30.188        "runtime": 1.050001,
00:28:30.188        "iops": 11087.608487991916,
00:28:30.188        "mibps": 43.31097065621842,
00:28:30.188        "io_failed": 0,
00:28:30.188        "io_timeout": 0,
00:28:30.188        "avg_latency_us": 11076.382016508374,
00:28:30.188        "min_latency_us": 2153.325714285714,
00:28:30.188        "max_latency_us": 43690.666666666664
00:28:30.188      }
00:28:30.188    ],
00:28:30.188    "core_count": 1
00:28:30.188  }
00:28:30.188   00:09:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:30.188  [2024-12-10 00:09:39.222067] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:28:30.188  [2024-12-10 00:09:39.222120] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3190060 ]
00:28:30.188  [2024-12-10 00:09:39.302168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:30.188  [2024-12-10 00:09:39.338936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:30.188  [2024-12-10 00:09:41.337912] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:30.188  [2024-12-10 00:09:41.337957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:30.188  [2024-12-10 00:09:41.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.188  [2024-12-10 00:09:41.337976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:30.188  [2024-12-10 00:09:41.337984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.188  [2024-12-10 00:09:41.337991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:30.188  [2024-12-10 00:09:41.337997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.188  [2024-12-10 00:09:41.338004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:30.188  [2024-12-10 00:09:41.338010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:30.188  [2024-12-10 00:09:41.338017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:28:30.188  [2024-12-10 00:09:41.338043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:28:30.188  [2024-12-10 00:09:41.338056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22895d0 (9): Bad file descriptor
00:28:30.188  [2024-12-10 00:09:41.471252] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:28:30.188  Running I/O for 1 seconds...
00:28:30.188      11466.00 IOPS,    44.79 MiB/s
00:28:30.188                                                                                                  Latency(us)
00:28:30.188  
[2024-12-09T23:09:46.045Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:30.188  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:30.188  	 Verification LBA range: start 0x0 length 0x4000
00:28:30.188  	 NVMe0n1             :       1.05   11087.61      43.31       0.00     0.00   11076.38    2153.33   43690.67
00:28:30.188  
[2024-12-09T23:09:46.045Z]  ===================================================================================================================
00:28:30.188  
[2024-12-09T23:09:46.045Z]  Total                       :              11087.61      43.31       0.00     0.00   11076.38    2153.33   43690.67
00:28:30.188   00:09:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:30.188   00:09:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:28:30.188   00:09:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:30.446   00:09:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:30.446   00:09:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:28:30.705   00:09:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:28:30.963   00:09:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3190060
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3190060 ']'
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3190060
00:28:34.254    00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:34.254    00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3190060
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3190060'
00:28:34.254  killing process with pid 3190060
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3190060
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3190060
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:28:34.254   00:09:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:34.513  rmmod nvme_tcp
00:28:34.513  rmmod nvme_fabrics
00:28:34.513  rmmod nvme_keyring
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3186970 ']'
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3186970
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3186970 ']'
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3186970
00:28:34.513    00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:34.513    00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186970
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3186970'
00:28:34.513  killing process with pid 3186970
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3186970
00:28:34.513   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3186970
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:34.772   00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:34.772    00:09:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.306   00:09:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:37.306  
00:28:37.306  real	0m38.301s
00:28:37.306  user	2m1.721s
00:28:37.306  sys	0m7.907s
00:28:37.306   00:09:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:37.306   00:09:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:28:37.306  ************************************
00:28:37.306  END TEST nvmf_failover
00:28:37.307  ************************************
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:37.307  ************************************
00:28:37.307  START TEST nvmf_host_discovery
00:28:37.307  ************************************
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:28:37.307  * Looking for test storage...
00:28:37.307  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-:
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-:
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:37.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.307  		--rc genhtml_branch_coverage=1
00:28:37.307  		--rc genhtml_function_coverage=1
00:28:37.307  		--rc genhtml_legend=1
00:28:37.307  		--rc geninfo_all_blocks=1
00:28:37.307  		--rc geninfo_unexecuted_blocks=1
00:28:37.307  		
00:28:37.307  		'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:37.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.307  		--rc genhtml_branch_coverage=1
00:28:37.307  		--rc genhtml_function_coverage=1
00:28:37.307  		--rc genhtml_legend=1
00:28:37.307  		--rc geninfo_all_blocks=1
00:28:37.307  		--rc geninfo_unexecuted_blocks=1
00:28:37.307  		
00:28:37.307  		'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:37.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.307  		--rc genhtml_branch_coverage=1
00:28:37.307  		--rc genhtml_function_coverage=1
00:28:37.307  		--rc genhtml_legend=1
00:28:37.307  		--rc geninfo_all_blocks=1
00:28:37.307  		--rc geninfo_unexecuted_blocks=1
00:28:37.307  		
00:28:37.307  		'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:37.307  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:37.307  		--rc genhtml_branch_coverage=1
00:28:37.307  		--rc genhtml_function_coverage=1
00:28:37.307  		--rc genhtml_legend=1
00:28:37.307  		--rc geninfo_all_blocks=1
00:28:37.307  		--rc geninfo_unexecuted_blocks=1
00:28:37.307  		
00:28:37.307  		'
00:28:37.307   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:37.307    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:37.307     00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:37.308      00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.308      00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.308      00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.308      00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
00:28:37.308      00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:37.308  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:37.308    00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable
00:28:37.308   00:09:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=()
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:28:42.585  Found 0000:af:00.0 (0x8086 - 0x159b)
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:28:42.585  Found 0000:af:00.1 (0x8086 - 0x159b)
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:28:42.585  Found net devices under 0000:af:00.0: cvl_0_0
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:28:42.585   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]]
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:28:42.586  Found net devices under 0000:af:00.1: cvl_0_1
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:28:42.586   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:28:42.845   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:28:42.845   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:28:42.845   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:28:42.845   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:42.845   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
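The `ipts` call above (expanded at nvmf/common.sh@790) wraps `iptables` so every rule SPDK inserts carries an identifying comment built from the original arguments, which makes cleanup greppable later. A minimal reimplementation of that wrapper pattern, exercised against a stub `iptables` function so it runs without root or netfilter:

```shell
#!/usr/bin/env bash
# Stub iptables so the sketch is runnable without root; it only records
# the argument list it was invoked with.
iptables() { last_iptables_args="$*"; }

# Sketch of the helper seen in the trace: forward the rule verbatim and
# append a comment tagging the rule with its own arguments.
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
echo "$last_iptables_args"
```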
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:43.109  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:43.109  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms
00:28:43.109  
00:28:43.109  --- 10.0.0.2 ping statistics ---
00:28:43.109  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:43.109  rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:43.109  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:43.109  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms
00:28:43.109  
00:28:43.109  --- 10.0.0.1 ping statistics ---
00:28:43.109  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:43.109  rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
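The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` line above prepends the `ip netns exec` wrapper to the target's argv, so the nvmf_tgt binary is later launched inside the freshly created namespace. A sketch of that array-prepend idiom with dummy stand-in values:

```shell
#!/usr/bin/env bash
# Dummy stand-ins for the arrays the trace builds: the netns wrapper
# command and the original application argv.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -m 0x2)

# Prepend one array to another: the wrapper tokens now come first, so
# executing "${NVMF_APP[@]}" would run the app inside the namespace.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[*]}"
```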
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:43.109   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3195358
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3195358
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3195358 ']'
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:43.110  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:43.110   00:09:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.110  [2024-12-10 00:09:58.876613] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:28:43.110  [2024-12-10 00:09:58.876658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:43.110  [2024-12-10 00:09:58.951015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.371  [2024-12-10 00:09:58.989368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:43.371  [2024-12-10 00:09:58.989401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:43.371  [2024-12-10 00:09:58.989408] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:43.371  [2024-12-10 00:09:58.989414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:43.371  [2024-12-10 00:09:58.989418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:43.371  [2024-12-10 00:09:58.989893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371  [2024-12-10 00:09:59.132835] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371  [2024-12-10 00:09:59.145027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371  null0
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371  null1
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3195409
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3195409 /tmp/host.sock
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3195409 ']'
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:28:43.371  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:43.371   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.371  [2024-12-10 00:09:59.222466] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:28:43.371  [2024-12-10 00:09:59.222509] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3195409 ]
00:28:43.631  [2024-12-10 00:09:59.294021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:43.631  [2024-12-10 00:09:59.334393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.631   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:28:43.631    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]]
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.890    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:43.891  [2024-12-10 00:09:59.734551] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:43.891   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:43.891    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:44.150     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:44.150     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:44.150     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.150     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:44.150     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:28:44.150    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
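The `waitforcondition` helper traced above (common/autotest_common.sh@918-924) re-evaluates an arbitrary condition string up to `max` times, sleeping between attempts, and returns non-zero if the budget runs out. A self-contained sketch of that retry loop, driven by a hypothetical condition that only passes on its third evaluation (the real helper sleeps 1s per attempt; 0.1s is used here to keep the sketch fast):

```shell
#!/usr/bin/env bash
# Sketch of the autotest polling helper: eval the condition until it
# succeeds or the retry budget is exhausted.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0
        fi
        sleep 0.1
    done
    return 1   # timed out
}

# Hypothetical condition: succeeds once it has been evaluated 3 times.
attempts=0
ready_after_three() { (( ++attempts >= 3 )); }

waitforcondition ready_after_three && status=ok || status=timeout
echo "$status after $attempts attempts"
```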
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:44.150   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:44.151     00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:44.151    00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:28:44.151   00:09:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:28:44.717  [2024-12-10 00:10:00.492338] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:44.717  [2024-12-10 00:10:00.492361] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:44.717  [2024-12-10 00:10:00.492374] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:44.975  [2024-12-10 00:10:00.580630] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:28:44.975  [2024-12-10 00:10:00.640193] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:28:44.975  [2024-12-10 00:10:00.640921] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12edfa0:1 started.
00:28:44.975  [2024-12-10 00:10:00.642278] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:44.975  [2024-12-10 00:10:00.642295] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:44.975  [2024-12-10 00:10:00.649365] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12edfa0 was disconnected and freed. delete nvme_qpair.
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.239    00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.239   00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:45.239     00:10:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.239     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.239    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.239   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:28:45.239     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:45.239     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:45.239     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.240    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.240   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:45.240    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.240     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:45.501  [2024-12-10 00:10:01.215934] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x12ee320:1 started.
00:28:45.501  [2024-12-10 00:10:01.220821] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x12ee320 was disconnected and freed. delete nvme_qpair.
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.501  [2024-12-10 00:10:01.298768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:45.501  [2024-12-10 00:10:01.299214] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:28:45.501  [2024-12-10 00:10:01.299232] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:45.501     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.501    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.501   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:45.502     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:45.760  [2024-12-10 00:10:01.386809] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.760    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:45.760     00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:45.760    00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:28:45.760   00:10:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
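The `sleep 1` above is the retry back-off of the `waitforcondition` helper whose xtrace (autotest_common.sh@918-924) repeats throughout this log: it re-evaluates the condition up to `max=10` times, sleeping between attempts. A minimal sketch reconstructed from the trace follows; the real helper lives in `common/autotest_common.sh` and may differ in detail:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of waitforcondition from the xtrace above.
# Polls an arbitrary shell condition until it passes or max attempts run out.
waitforcondition() {
	local cond=$1     # @918: condition string, re-eval'd each iteration
	local max=10      # @919: retry budget
	while (( max-- )); do                 # @920
		if eval "$cond"; then         # @921: e.g. '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
			return 0              # @922: condition met
		fi
		sleep 1                       # @924: back off before retrying
	done
	return 1                              # budget exhausted
}
```

Passing the condition as a string (rather than a command) is what produces the quoted `eval '[[' ... ']]'` lines in the trace, since `eval` re-splits and re-evaluates it on every poll.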
00:28:45.760  [2024-12-10 00:10:01.606927] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:28:45.760  [2024-12-10 00:10:01.606963] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:45.760  [2024-12-10 00:10:01.606971] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:45.760  [2024-12-10 00:10:01.606976] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.694    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:46.694   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:46.694    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.694     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.954    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:28:46.954    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:28:46.954    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.954  [2024-12-10 00:10:02.562444] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:28:46.954  [2024-12-10 00:10:02.562464] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:46.954   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:28:46.954  [2024-12-10 00:10:02.570293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:46.954  [2024-12-10 00:10:02.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.954  [2024-12-10 00:10:02.570317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:46.954  [2024-12-10 00:10:02.570324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.954  [2024-12-10 00:10:02.570352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:46.954  [2024-12-10 00:10:02.570359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.954  [2024-12-10 00:10:02.570366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:28:46.954  [2024-12-10 00:10:02.570372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:46.954  [2024-12-10 00:10:02.570379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:46.954  [2024-12-10 00:10:02.580306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.954     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.954  [2024-12-10 00:10:02.590340] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.954  [2024-12-10 00:10:02.590351] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.954  [2024-12-10 00:10:02.590357] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.954  [2024-12-10 00:10:02.590362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.954  [2024-12-10 00:10:02.590377] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.954  [2024-12-10 00:10:02.590583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.954  [2024-12-10 00:10:02.590596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.954  [2024-12-10 00:10:02.590604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.954  [2024-12-10 00:10:02.590615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.954  [2024-12-10 00:10:02.590632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.954  [2024-12-10 00:10:02.590639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.954  [2024-12-10 00:10:02.590646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.954  [2024-12-10 00:10:02.590652] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.954  [2024-12-10 00:10:02.590657] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.954  [2024-12-10 00:10:02.590661] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.954  [2024-12-10 00:10:02.600406] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.954  [2024-12-10 00:10:02.600420] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.954  [2024-12-10 00:10:02.600424] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.954  [2024-12-10 00:10:02.600428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.954  [2024-12-10 00:10:02.600441] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.954  [2024-12-10 00:10:02.600545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.954  [2024-12-10 00:10:02.600556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.954  [2024-12-10 00:10:02.600563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.954  [2024-12-10 00:10:02.600573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.954  [2024-12-10 00:10:02.600587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.954  [2024-12-10 00:10:02.600593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.954  [2024-12-10 00:10:02.600600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.954  [2024-12-10 00:10:02.600605] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.954  [2024-12-10 00:10:02.600609] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.954  [2024-12-10 00:10:02.600613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.954  [2024-12-10 00:10:02.610472] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.954  [2024-12-10 00:10:02.610484] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.954  [2024-12-10 00:10:02.610488] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.954  [2024-12-10 00:10:02.610492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.955  [2024-12-10 00:10:02.610505] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.610698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.955  [2024-12-10 00:10:02.610710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.955  [2024-12-10 00:10:02.610718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.955  [2024-12-10 00:10:02.610728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.955  [2024-12-10 00:10:02.610743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.955  [2024-12-10 00:10:02.610750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.955  [2024-12-10 00:10:02.610756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.955  [2024-12-10 00:10:02.610761] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.955  [2024-12-10 00:10:02.610766] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.955  [2024-12-10 00:10:02.610769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.955    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:46.955   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:28:46.955  [2024-12-10 00:10:02.620535] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.955  [2024-12-10 00:10:02.620548] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.955  [2024-12-10 00:10:02.620552] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.620556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.955  [2024-12-10 00:10:02.620568] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.620749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.955  [2024-12-10 00:10:02.620760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.955  [2024-12-10 00:10:02.620766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.955  [2024-12-10 00:10:02.620776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.955  [2024-12-10 00:10:02.620796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.955  [2024-12-10 00:10:02.620803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.955  [2024-12-10 00:10:02.620809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.955  [2024-12-10 00:10:02.620814] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.955  [2024-12-10 00:10:02.620818] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.955  [2024-12-10 00:10:02.620822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.955     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:46.955  [2024-12-10 00:10:02.630598] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.955  [2024-12-10 00:10:02.630612] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.955  [2024-12-10 00:10:02.630616] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.630620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.955  [2024-12-10 00:10:02.630636] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.630747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.955  [2024-12-10 00:10:02.630759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.955  [2024-12-10 00:10:02.630765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.955  [2024-12-10 00:10:02.630775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.955  [2024-12-10 00:10:02.630783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.955  [2024-12-10 00:10:02.630789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.955  [2024-12-10 00:10:02.630795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.955  [2024-12-10 00:10:02.630800] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.955  [2024-12-10 00:10:02.630804] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.955  [2024-12-10 00:10:02.630808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.955  [2024-12-10 00:10:02.640667] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.955  [2024-12-10 00:10:02.640677] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.955  [2024-12-10 00:10:02.640681] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.640685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.955  [2024-12-10 00:10:02.640697] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.640861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.955  [2024-12-10 00:10:02.640872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.955  [2024-12-10 00:10:02.640878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.955  [2024-12-10 00:10:02.640888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.955  [2024-12-10 00:10:02.640902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.955  [2024-12-10 00:10:02.640908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.955  [2024-12-10 00:10:02.640915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.955  [2024-12-10 00:10:02.640920] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.955  [2024-12-10 00:10:02.640924] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.955  [2024-12-10 00:10:02.640928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.955  [2024-12-10 00:10:02.650727] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.955  [2024-12-10 00:10:02.650739] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.955  [2024-12-10 00:10:02.650742] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.650749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.955  [2024-12-10 00:10:02.650762] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.955  [2024-12-10 00:10:02.651040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.956  [2024-12-10 00:10:02.651052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.956  [2024-12-10 00:10:02.651059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.956  [2024-12-10 00:10:02.651069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.956  [2024-12-10 00:10:02.651078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.956  [2024-12-10 00:10:02.651084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.956  [2024-12-10 00:10:02.651090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.956  [2024-12-10 00:10:02.651095] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.956  [2024-12-10 00:10:02.651099] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.956  [2024-12-10 00:10:02.651103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956  [2024-12-10 00:10:02.660793] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.956  [2024-12-10 00:10:02.660803] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.956  [2024-12-10 00:10:02.660807] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.660810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.956  [2024-12-10 00:10:02.660823] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.661064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.956  [2024-12-10 00:10:02.661075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.956  [2024-12-10 00:10:02.661082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.956  [2024-12-10 00:10:02.661091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.956  [2024-12-10 00:10:02.661105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.956  [2024-12-10 00:10:02.661111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.956  [2024-12-10 00:10:02.661117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.956  [2024-12-10 00:10:02.661123] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.956  [2024-12-10 00:10:02.661127] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.956  [2024-12-10 00:10:02.661131] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.956    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:28:46.956  [2024-12-10 00:10:02.670853] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.956  [2024-12-10 00:10:02.670865] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.956  [2024-12-10 00:10:02.670869] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.670873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.956  [2024-12-10 00:10:02.670885] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.670967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.956  [2024-12-10 00:10:02.670977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.956  [2024-12-10 00:10:02.670984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.956  [2024-12-10 00:10:02.670993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.956  [2024-12-10 00:10:02.671001] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.956  [2024-12-10 00:10:02.671007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.956  [2024-12-10 00:10:02.671013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.956  [2024-12-10 00:10:02.671019] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.956  [2024-12-10 00:10:02.671024] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.956  [2024-12-10 00:10:02.671028] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:46.956  [2024-12-10 00:10:02.680914] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:28:46.956  [2024-12-10 00:10:02.680926] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:28:46.956  [2024-12-10 00:10:02.680931] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.680934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:28:46.956  [2024-12-10 00:10:02.680953] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:28:46.956  [2024-12-10 00:10:02.681132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.956  [2024-12-10 00:10:02.681146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12be410 with addr=10.0.0.2, port=4420
00:28:46.956  [2024-12-10 00:10:02.681152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12be410 is same with the state(6) to be set
00:28:46.956  [2024-12-10 00:10:02.681162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12be410 (9): Bad file descriptor
00:28:46.956  [2024-12-10 00:10:02.681192] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:28:46.956  [2024-12-10 00:10:02.681199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:28:46.956  [2024-12-10 00:10:02.681205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:28:46.956  [2024-12-10 00:10:02.681211] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:28:46.956  [2024-12-10 00:10:02.681215] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:28:46.956  [2024-12-10 00:10:02.681219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:28:46.956     00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:46.956  [2024-12-10 00:10:02.689606] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:28:46.956  [2024-12-10 00:10:02.689624] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:46.956    00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]]
00:28:46.956   00:10:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:28:47.892   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:47.892   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:28:47.892     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
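The xtrace above repeatedly shows the `waitforcondition` polling helper from common/autotest_common.sh: it stores the condition string (`local cond=...`), caps attempts at `max=10`, re-`eval`s the condition each pass, and sleeps one second between failures before returning 0 on success. A minimal sketch reconstructed from those traced lines (not the verbatim upstream source; the real helper may differ in details such as its failure return path):

```shell
#!/usr/bin/env bash

# Poll a condition string until it holds or attempts run out.
# Reconstructed from the traced lines: local cond/max, (( max-- )),
# eval "$cond", sleep 1, return 0.
waitforcondition() {
	local cond=$1
	local max=10

	# (( max-- )) is true while max is nonzero, so this tries up to 10 times.
	while (( max-- )); do
		# Re-evaluate the condition string on every pass, e.g.
		# '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'.
		if eval "$cond"; then
			return 0
		fi
		# Condition not met yet; back off before retrying.
		sleep 1
	done

	# Assumption: the condition never becoming true is reported as failure.
	return 1
}
```

This is why the log interleaves several identical `get_bdev_list` / `get_subsystem_paths` RPC traces before a final `return 0`: each iteration of the loop re-runs the whole condition, and the SPDK target keeps emitting its own reconnect errors in between.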
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:28:48.155   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:28:48.155    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:28:48.155     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:48.156     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:28:48.156     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.156     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:48.156     00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:48.156    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2
00:28:48.156    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4
00:28:48.156    00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:28:48.156   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:28:48.156   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:48.156   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:48.156   00:10:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.534  [2024-12-10 00:10:05.042631] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:49.534  [2024-12-10 00:10:05.042647] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:49.534  [2024-12-10 00:10:05.042658] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:49.534  [2024-12-10 00:10:05.128913] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:28:49.534  [2024-12-10 00:10:05.388107] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421
00:28:49.534  [2024-12-10 00:10:05.388692] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x12bbc10:1 started.
00:28:49.534  [2024-12-10 00:10:05.390253] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:49.534  [2024-12-10 00:10:05.390277] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:49.534  [2024-12-10 00:10:05.391457] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x12bbc10 was disconnected and freed. delete nvme_qpair.
00:28:49.534   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.534   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.868  request:
00:28:49.868  {
00:28:49.868  "name": "nvme",
00:28:49.868  "trtype": "tcp",
00:28:49.868  "traddr": "10.0.0.2",
00:28:49.868  "adrfam": "ipv4",
00:28:49.868  "trsvcid": "8009",
00:28:49.868  "hostnqn": "nqn.2021-12.io.spdk:test",
00:28:49.868  "wait_for_attach": true,
00:28:49.868  "method": "bdev_nvme_start_discovery",
00:28:49.868  "req_id": 1
00:28:49.868  }
00:28:49.868  Got JSON-RPC error response
00:28:49.868  response:
00:28:49.868  {
00:28:49.868  "code": -17,
00:28:49.868  "message": "File exists"
00:28:49.868  }
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:49.868   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:28:49.868    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.869  request:
00:28:49.869  {
00:28:49.869  "name": "nvme_second",
00:28:49.869  "trtype": "tcp",
00:28:49.869  "traddr": "10.0.0.2",
00:28:49.869  "adrfam": "ipv4",
00:28:49.869  "trsvcid": "8009",
00:28:49.869  "hostnqn": "nqn.2021-12.io.spdk:test",
00:28:49.869  "wait_for_attach": true,
00:28:49.869  "method": "bdev_nvme_start_discovery",
00:28:49.869  "req_id": 1
00:28:49.869  }
00:28:49.869  Got JSON-RPC error response
00:28:49.869  response:
00:28:49.869  {
00:28:49.869  "code": -17,
00:28:49.869  "message": "File exists"
00:28:49.869  }
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]]
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.869    00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.869   00:10:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:50.844  [2024-12-10 00:10:06.633961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.844  [2024-12-10 00:10:06.633989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ed520 with addr=10.0.0.2, port=8010
00:28:50.844  [2024-12-10 00:10:06.634004] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:28:50.844  [2024-12-10 00:10:06.634026] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:50.844  [2024-12-10 00:10:06.634033] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:28:51.780  [2024-12-10 00:10:07.636364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:51.780  [2024-12-10 00:10:07.636389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12d4e90 with addr=10.0.0.2, port=8010
00:28:51.780  [2024-12-10 00:10:07.636400] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:28:51.780  [2024-12-10 00:10:07.636406] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:28:51.780  [2024-12-10 00:10:07.636412] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:28:53.155  [2024-12-10 00:10:08.638540] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:28:53.155  request:
00:28:53.155  {
00:28:53.155  "name": "nvme_second",
00:28:53.155  "trtype": "tcp",
00:28:53.155  "traddr": "10.0.0.2",
00:28:53.155  "adrfam": "ipv4",
00:28:53.155  "trsvcid": "8010",
00:28:53.155  "hostnqn": "nqn.2021-12.io.spdk:test",
00:28:53.155  "wait_for_attach": false,
00:28:53.155  "attach_timeout_ms": 3000,
00:28:53.155  "method": "bdev_nvme_start_discovery",
00:28:53.155  "req_id": 1
00:28:53.155  }
00:28:53.155  Got JSON-RPC error response
00:28:53.155  response:
00:28:53.155  {
00:28:53.155  "code": -110,
00:28:53.155  "message": "Connection timed out"
00:28:53.155  }
00:28:53.155   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:28:53.155   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1
00:28:53.155   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:53.155   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:53.155   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:53.155    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:28:53.155    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3195409
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:53.156  rmmod nvme_tcp
00:28:53.156  rmmod nvme_fabrics
00:28:53.156  rmmod nvme_keyring
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3195358 ']'
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3195358
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3195358 ']'
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3195358
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195358
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195358'
00:28:53.156  killing process with pid 3195358
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3195358
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3195358
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:53.156   00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:53.156    00:10:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:55.691  
00:28:55.691  real	0m18.417s
00:28:55.691  user	0m22.889s
00:28:55.691  sys	0m5.772s
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:28:55.691  ************************************
00:28:55.691  END TEST nvmf_host_discovery
00:28:55.691  ************************************
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:55.691  ************************************
00:28:55.691  START TEST nvmf_host_multipath_status
00:28:55.691  ************************************
00:28:55.691   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:28:55.691  * Looking for test storage...
00:28:55.691  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:28:55.691     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version
00:28:55.691     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-:
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-:
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<'
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1
00:28:55.691    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 ))
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:28:55.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.692  		--rc genhtml_branch_coverage=1
00:28:55.692  		--rc genhtml_function_coverage=1
00:28:55.692  		--rc genhtml_legend=1
00:28:55.692  		--rc geninfo_all_blocks=1
00:28:55.692  		--rc geninfo_unexecuted_blocks=1
00:28:55.692  		
00:28:55.692  		'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:28:55.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.692  		--rc genhtml_branch_coverage=1
00:28:55.692  		--rc genhtml_function_coverage=1
00:28:55.692  		--rc genhtml_legend=1
00:28:55.692  		--rc geninfo_all_blocks=1
00:28:55.692  		--rc geninfo_unexecuted_blocks=1
00:28:55.692  		
00:28:55.692  		'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:28:55.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.692  		--rc genhtml_branch_coverage=1
00:28:55.692  		--rc genhtml_function_coverage=1
00:28:55.692  		--rc genhtml_legend=1
00:28:55.692  		--rc geninfo_all_blocks=1
00:28:55.692  		--rc geninfo_unexecuted_blocks=1
00:28:55.692  		
00:28:55.692  		'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:28:55.692  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:28:55.692  		--rc genhtml_branch_coverage=1
00:28:55.692  		--rc genhtml_function_coverage=1
00:28:55.692  		--rc genhtml_legend=1
00:28:55.692  		--rc geninfo_all_blocks=1
00:28:55.692  		--rc geninfo_unexecuted_blocks=1
00:28:55.692  		
00:28:55.692  		'
00:28:55.692   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:55.692     00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:55.692      00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.692      00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.692      00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.692      00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH
00:28:55.692      00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:28:55.692    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:28:55.692  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:28:55.693    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:28:55.693    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:28:55.693    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:55.693    00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable
00:28:55.693   00:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=()
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:29:02.261  Found 0000:af:00.0 (0x8086 - 0x159b)
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:29:02.261  Found 0000:af:00.1 (0x8086 - 0x159b)
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:29:02.261  Found net devices under 0000:af:00.0: cvl_0_0
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:02.261   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:02.262  Found net devices under 0000:af:00.1: cvl_0_1
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:02.262   00:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:02.262  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:02.262  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms
00:29:02.262  
00:29:02.262  --- 10.0.0.2 ping statistics ---
00:29:02.262  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.262  rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:02.262  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:02.262  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:29:02.262  
00:29:02.262  --- 10.0.0.1 ping statistics ---
00:29:02.262  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:02.262  rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3200617
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3200617
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3200617 ']'
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:02.262  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:02.262   00:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:02.262  [2024-12-10 00:10:17.298479] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:29:02.262  [2024-12-10 00:10:17.298521] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:02.262  [2024-12-10 00:10:17.373622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:29:02.262  [2024-12-10 00:10:17.414228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:02.262  [2024-12-10 00:10:17.414262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:02.262  [2024-12-10 00:10:17.414268] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:02.262  [2024-12-10 00:10:17.414274] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:02.262  [2024-12-10 00:10:17.414279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:02.262  [2024-12-10 00:10:17.415344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:02.262  [2024-12-10 00:10:17.415345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3200617
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:29:02.521  [2024-12-10 00:10:18.346177] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:02.521   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:29:02.780  Malloc0
00:29:02.780   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
00:29:03.039   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:03.298   00:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:03.557  [2024-12-10 00:10:19.159876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:29:03.557  [2024-12-10 00:10:19.352427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3200875
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3200875 /var/tmp/bdevperf.sock
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3200875 ']'
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:29:03.557  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:03.557   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:03.815   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:03.815   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0
00:29:03.815   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:29:04.073   00:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:04.640  Nvme0n1
00:29:04.640   00:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
00:29:04.899  Nvme0n1
00:29:04.899   00:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests
00:29:04.899   00:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2
00:29:07.434   00:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized
00:29:07.434   00:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:29:07.434   00:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:07.434   00:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1
00:29:08.372   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true
00:29:08.372   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:08.372    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:08.372    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:08.631   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:08.631   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:08.631    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:08.631    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:08.889   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:08.889   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:08.889    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:08.889    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:09.148   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:09.148   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:09.148    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:09.148    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:09.148   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:09.148   00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:09.148    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:09.148    00:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:09.408   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:09.408   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:09.408    00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:09.408    00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:09.669   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
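The `port_status` checks traced above (multipath_status.sh@64) pipe `bdev_nvme_get_io_paths` output through a `jq` filter and compare the result against the expected value. A minimal self-contained sketch of the same selection logic in Python — the payload shape is inferred from the jq paths in the trace (`.poll_groups[].io_paths[].transport.trsvcid`, `.current`, `.connected`, `.accessible`); any further fields of the real RPC output are not modeled here:

```python
import json

# Sample payload shaped like bdev_nvme_get_io_paths output, inferred from the
# jq paths used in the trace above.
SAMPLE = json.loads("""
{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": true,
   "connected": true, "accessible": true},
  {"transport": {"trsvcid": "4421"}, "current": false,
   "connected": true, "accessible": true}
]}]}
""")

def port_status(payload, port, field, expected):
    """Mirror of the traced check:
    jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD'
    followed by the bash comparison [[ "$out" == "$expected" ]]."""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field] == expected
    return False

# Same expectations as the first check_status block in the trace:
assert port_status(SAMPLE, "4420", "current", True)
assert port_status(SAMPLE, "4421", "current", False)
```

The bash original reaches the same comparison through pattern matching (`[[ true == \t\r\u\e ]]`); the escaped right-hand side simply forces a literal match.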
00:29:09.669   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized
00:29:09.669   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:09.930   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:10.189   00:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
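The `set_ANA_state` helper (multipath_status.sh@59–@60) issues one `nvmf_subsystem_listener_set_ana_state` RPC per listener. A hedged sketch that only rebuilds the two traced command lines as argv lists — NQN, address, and script path copied from the log; it does not invoke `rpc.py`:

```python
# Constants copied verbatim from the traced @59/@60 command lines.
RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"
ADDR = "10.0.0.2"

def set_ana_state_cmds(state_4420, state_4421):
    """Build the two listener RPC invocations traced at @59 and @60:
    one for trsvcid 4420, one for 4421."""
    def cmd(port, state):
        return [RPC, "nvmf_subsystem_listener_set_ana_state", NQN,
                "-t", "tcp", "-a", ADDR, "-s", port, "-n", state]
    return [cmd("4420", state_4420), cmd("4421", state_4421)]

first, second = set_ana_state_cmds("non_optimized", "optimized")
assert first[-1] == "non_optimized"
assert second[-1] == "optimized"
```

The `sleep 1` that follows each pair in the trace gives the initiator time to pick up the ANA change before `check_status` asserts the new path states.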
00:29:11.123   00:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:29:11.123   00:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:11.123    00:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:11.123    00:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:11.382   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:11.382   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:11.382    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:11.382    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:11.640   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:11.640   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:11.640    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:11.640    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:11.899   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:11.899   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:11.899    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:11.899    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:11.899   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:11.899   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:11.899    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:11.899    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:12.158   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:12.158   00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:12.158    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:12.158    00:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:12.417   00:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:12.417   00:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:29:12.417   00:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:12.676   00:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:29:12.935   00:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:29:13.874   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:29:13.874   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:13.874    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:13.874    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:14.134   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:14.134   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:14.134    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:14.134    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:14.134   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:14.134   00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:14.134    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:14.134    00:10:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:14.392   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:14.392   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:14.392    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:14.392    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:14.650   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:14.650   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:14.650    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:14.650    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:14.909   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:14.909   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:14.909    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:14.909    00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:15.168   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:15.168   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:29:15.168   00:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:15.168   00:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:29:15.426   00:10:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:29:16.807   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:29:16.807   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:16.807    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:16.807    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:16.807   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:16.808   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:16.808    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:16.808    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:16.808   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:16.808   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:16.808    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:16.808    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:17.066   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:17.066   00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:17.066    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:17.066    00:10:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:17.324   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:17.324   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:17.324    00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:17.324    00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:17.582   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:17.582   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:29:17.582    00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:17.582    00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:17.841   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:17.841   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:29:17.841   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:29:18.100   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:29:18.100   00:10:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:29:19.476   00:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:29:19.476   00:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:19.476    00:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:19.476    00:10:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:19.476   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:19.476   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:19.476    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:19.476    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:19.738   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:19.738   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:19.738    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:19.738    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:19.738   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:19.738   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:19.738    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:19.738    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:19.997   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:19.997   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:29:19.997    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:19.997    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:20.256   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:20.256   00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:29:20.256    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:20.256    00:10:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:20.514   00:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
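Across the active_passive checks above, the `current` flag tracks the best-ranked accessible listener: optimized beats non_optimized, inaccessible paths are never current, and on a tie the first listener (4420) wins. A sketch encoding that rule, with the expected vectors taken from the `check_status` calls in this trace — the ranking is an inference from these observed cases, not a statement of documented SPDK policy:

```python
# Relative preference inferred from the traced outcomes.
RANK = {"optimized": 2, "non_optimized": 1, "inaccessible": 0}

def current_flags(ana_4420, ana_4421):
    """Observed active_passive behaviour: the single current path is the
    first listener holding the best non-inaccessible ANA state."""
    best = max(RANK[ana_4420], RANK[ana_4421])
    if best == 0:
        return (False, False)   # all paths inaccessible -> no current path
    if RANK[ana_4420] == best:
        return (True, False)    # tie or better -> first listener wins
    return (False, True)

# (4420 current, 4421 current) vectors observed in the check_status calls:
assert current_flags("non_optimized", "optimized") == (False, True)
assert current_flags("non_optimized", "non_optimized") == (True, False)
assert current_flags("non_optimized", "inaccessible") == (True, False)
assert current_flags("inaccessible", "inaccessible") == (False, False)
```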
00:29:20.514   00:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:29:20.514   00:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:29:20.514   00:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:20.773   00:10:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:29:21.713   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:29:21.713   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:21.713    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:21.713    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:21.972   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:21.972   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:21.972    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:21.972    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:22.231   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:22.231   00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:22.231    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:22.231    00:10:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:22.489   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:22.489   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:22.489    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:22.489    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:22.749   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:22.749   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:29:22.749    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:22.749    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:22.749   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:22.749   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:22.749    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:22.749    00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:23.007   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:23.007   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:29:23.266   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:29:23.266   00:10:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:29:23.525   00:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:23.784   00:10:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:29:24.720   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:29:24.720   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:24.720    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:24.720    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:24.978   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:24.978   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:24.978    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:24.978    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:24.978   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:24.978   00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:25.237    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:25.237    00:10:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:25.237   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:25.237   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:25.237    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:25.237    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:25.496   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:25.496   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:25.496    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:25.496    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:25.755   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:25.755   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:25.755    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:25.755    00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:26.014   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:26.014   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:29:26.014   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:26.273   00:10:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:29:26.273   00:10:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:29:27.649   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:29:27.649   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:29:27.649    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:27.649    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:27.649   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:27.649   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:27.649    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:27.649    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:27.908   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:27.908   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:27.908    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:27.908    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:27.908   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:27.908   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:27.908    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:27.908    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:28.167   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:28.167   00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:28.167    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:28.167    00:10:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:28.426   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:28.426   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:28.426    00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:28.426    00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:28.687   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:28.687   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:29:28.687   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:28.946   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:29:29.205   00:10:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:29:30.141   00:10:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:29:30.141   00:10:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:30.141    00:10:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:30.141    00:10:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:30.401   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:30.401   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:29:30.401    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:30.401    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:30.401   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:30.401   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:30.401    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:30.401    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:30.660   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:30.660   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:30.660    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:30.660    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:30.918   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:30.918   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:30.918    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:30.918    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:31.178   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:31.178   00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:29:31.178    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:31.178    00:10:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:31.437   00:10:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:31.437   00:10:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:29:31.437   00:10:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:29:31.695   00:10:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:29:31.695   00:10:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:29:33.072   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:29:33.072   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:29:33.072    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.072    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:29:33.072   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.072   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:29:33.072    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.072    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:29:33.331   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:33.331   00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:29:33.331    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.331    00:10:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:29:33.331   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.331   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:29:33.331    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.331    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:29:33.589   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.589   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:29:33.589    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.589    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:29:33.848   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:29:33.848   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:29:33.848    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:29:33.848    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3200875
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3200875 ']'
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3200875
00:29:34.107    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:34.107    00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3200875
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3200875'
00:29:34.107  killing process with pid 3200875
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3200875
00:29:34.107   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3200875
00:29:34.107  {
00:29:34.107    "results": [
00:29:34.107      {
00:29:34.107        "job": "Nvme0n1",
00:29:34.107        "core_mask": "0x4",
00:29:34.107        "workload": "verify",
00:29:34.107        "status": "terminated",
00:29:34.107        "verify_range": {
00:29:34.107          "start": 0,
00:29:34.107          "length": 16384
00:29:34.107        },
00:29:34.107        "queue_depth": 128,
00:29:34.107        "io_size": 4096,
00:29:34.107        "runtime": 28.979232,
00:29:34.107        "iops": 10691.380641143285,
00:29:34.107        "mibps": 41.76320562946596,
00:29:34.107        "io_failed": 0,
00:29:34.107        "io_timeout": 0,
00:29:34.107        "avg_latency_us": 11952.689561729181,
00:29:34.107        "min_latency_us": 628.0533333333333,
00:29:34.107        "max_latency_us": 3019898.88
00:29:34.107      }
00:29:34.107    ],
00:29:34.107    "core_count": 1
00:29:34.107  }
00:29:34.385   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3200875
00:29:34.385   00:10:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:34.385  [2024-12-10 00:10:19.427660] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:29:34.385  [2024-12-10 00:10:19.427709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3200875 ]
00:29:34.385  [2024-12-10 00:10:19.503525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:34.385  [2024-12-10 00:10:19.545154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:34.385  Running I/O for 90 seconds...
00:29:34.385      11643.00 IOPS,    45.48 MiB/s
[2024-12-09T23:10:50.242Z]     11694.50 IOPS,    45.68 MiB/s
[2024-12-09T23:10:50.242Z]     11728.00 IOPS,    45.81 MiB/s
[2024-12-09T23:10:50.242Z]     11721.50 IOPS,    45.79 MiB/s
[2024-12-09T23:10:50.242Z]     11698.00 IOPS,    45.70 MiB/s
[2024-12-09T23:10:50.243Z]     11688.50 IOPS,    45.66 MiB/s
[2024-12-09T23:10:50.243Z]     11667.86 IOPS,    45.58 MiB/s
[2024-12-09T23:10:50.243Z]     11647.62 IOPS,    45.50 MiB/s
[2024-12-09T23:10:50.243Z]     11640.44 IOPS,    45.47 MiB/s
[2024-12-09T23:10:50.243Z]     11628.20 IOPS,    45.42 MiB/s
[2024-12-09T23:10:50.243Z]     11622.91 IOPS,    45.40 MiB/s
[2024-12-09T23:10:50.243Z]     11612.92 IOPS,    45.36 MiB/s
[2024-12-09T23:10:50.243Z] [2024-12-10 00:10:33.692362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.386  [2024-12-10 00:10:33.692398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:15560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:15568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:15616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:15632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.692729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:15712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:15720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.386  [2024-12-10 00:10:33.693713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.386  [2024-12-10 00:10:33.693719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:15792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:15816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:15840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.693990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.693997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:15928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:15968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:15976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:16008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.387  [2024-12-10 00:10:33.694389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.387  [2024-12-10 00:10:33.694403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:16080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:16168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:16192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:16200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.694984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.694990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:16240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:16256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.388  [2024-12-10 00:10:33.695140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.388  [2024-12-10 00:10:33.695146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.389  [2024-12-10 00:10:33.695866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.389  [2024-12-10 00:10:33.695882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:16520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:33.695889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:33.695905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:33.695911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:33.695927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:16536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:33.695934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:33.695952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:33.695958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.390      11492.08 IOPS,    44.89 MiB/s
[2024-12-09T23:10:50.247Z]     10671.21 IOPS,    41.68 MiB/s
[2024-12-09T23:10:50.247Z]      9959.80 IOPS,    38.91 MiB/s
[2024-12-09T23:10:50.247Z]      9435.12 IOPS,    36.86 MiB/s
[2024-12-09T23:10:50.247Z]      9557.29 IOPS,    37.33 MiB/s
[2024-12-09T23:10:50.247Z]      9657.50 IOPS,    37.72 MiB/s
[2024-12-09T23:10:50.247Z]      9812.05 IOPS,    38.33 MiB/s
[2024-12-09T23:10:50.247Z]      9989.65 IOPS,    39.02 MiB/s
[2024-12-09T23:10:50.247Z]     10159.62 IOPS,    39.69 MiB/s
[2024-12-09T23:10:50.247Z]     10226.95 IOPS,    39.95 MiB/s
[2024-12-09T23:10:50.247Z]     10279.09 IOPS,    40.15 MiB/s
[2024-12-09T23:10:50.247Z]     10329.21 IOPS,    40.35 MiB/s
[2024-12-09T23:10:50.247Z]     10452.84 IOPS,    40.83 MiB/s
[2024-12-09T23:10:50.247Z]     10566.35 IOPS,    41.27 MiB/s
00:29:34.390  [2024-12-10 00:10:47.503422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.390  [2024-12-10 00:10:47.503957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.390  [2024-12-10 00:10:47.503964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.503976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.503983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.503996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:37808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:38696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.391  [2024-12-10 00:10:47.504754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:37904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:37968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.391  [2024-12-10 00:10:47.504833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.391  [2024-12-10 00:10:47.504845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.504851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.504870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.504888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.504906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.504925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.504943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.504962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.504982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.504993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.505000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.505012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.505018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.505031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.505037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:38304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:38432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:38496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.506398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.392  [2024-12-10 00:10:47.506404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.392  [2024-12-10 00:10:47.507382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:38024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.392  [2024-12-10 00:10:47.507401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:38088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:38648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.393  [2024-12-10 00:10:47.507849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.393  [2024-12-10 00:10:47.507867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.393  [2024-12-10 00:10:47.507885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:37936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.393  [2024-12-10 00:10:47.507963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.393  [2024-12-10 00:10:47.507981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.393  [2024-12-10 00:10:47.507993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.393  [2024-12-10 00:10:47.507999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:37992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.508616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:38080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.508983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:38144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.508996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.394  [2024-12-10 00:10:47.509135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.509311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.394  [2024-12-10 00:10:47.509331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.394  [2024-12-10 00:10:47.509343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:38680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.509541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:37832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.509600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.509613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:37928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.509619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.510905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.510927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.510945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.510964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.510983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.510995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.395  [2024-12-10 00:10:47.511154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.395  [2024-12-10 00:10:47.511208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.395  [2024-12-10 00:10:47.511215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.511227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.511234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.511246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.511252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.511264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.511271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.511283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:37872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.511289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.511301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.511308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.512467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.512491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.512510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.512982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.512988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.513000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.513007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.513019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.513025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.513037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.513044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.514375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.396  [2024-12-10 00:10:47.514393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.514407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.514413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.514426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.396  [2024-12-10 00:10:47.514433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.396  [2024-12-10 00:10:47.514445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:38592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.514934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.514984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.514991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.515002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.397  [2024-12-10 00:10:47.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.515024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.397  [2024-12-10 00:10:47.515031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.397  [2024-12-10 00:10:47.515043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.515050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.515062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.515069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.515081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.515088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:39296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.398  [2024-12-10 00:10:47.517791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.398  [2024-12-10 00:10:47.517847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.398  [2024-12-10 00:10:47.517859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.517884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.517903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.517921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.517939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.517957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.517976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.517988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.517994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.518785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.399  [2024-12-10 00:10:47.518938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.518956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.518974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.518986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.518993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.519007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.519014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.519026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.519032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.519044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.519051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.519064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.526089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.526107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.526114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.526126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.526132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.526145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.526151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.526163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.399  [2024-12-10 00:10:47.526174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.399  [2024-12-10 00:10:47.526186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.526975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.526987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.526993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.527012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.527031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.527049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.527067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.400  [2024-12-10 00:10:47.527089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.527599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.400  [2024-12-10 00:10:47.527619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.400  [2024-12-10 00:10:47.527631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.527808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.527931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.527938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.528660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.528680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.528698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.528772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.401  [2024-12-10 00:10:47.528791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.401  [2024-12-10 00:10:47.528810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.401  [2024-12-10 00:10:47.528822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.528829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.529982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.529998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.402  [2024-12-10 00:10:47.530474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.402  [2024-12-10 00:10:47.530487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.402  [2024-12-10 00:10:47.530494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.530506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.530512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.530524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.530531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.530543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.530550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.530562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.530569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.531652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.531780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.531790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.403  [2024-12-10 00:10:47.533831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.403  [2024-12-10 00:10:47.533861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.403  [2024-12-10 00:10:47.533867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.533904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.533923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.533941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.533961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.533980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.533991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.533998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.534016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.534035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.534229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.534248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.534989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.534996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.535008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.535015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.535027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.404  [2024-12-10 00:10:47.535034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.404  [2024-12-10 00:10:47.535046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.404  [2024-12-10 00:10:47.535052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.405  [2024-12-10 00:10:47.536923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.405  [2024-12-10 00:10:47.536972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.405  [2024-12-10 00:10:47.536979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.536991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.536997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.537828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.537840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.537847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.406  [2024-12-10 00:10:47.539157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.539185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.539204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.406  [2024-12-10 00:10:47.539216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.406  [2024-12-10 00:10:47.539223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.407  [2024-12-10 00:10:47.539697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.539728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.539735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.407  [2024-12-10 00:10:47.540391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.407  [2024-12-10 00:10:47.540407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.540488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.540507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.540525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.540599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.540617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.540630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.540636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:40640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:40672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.408  [2024-12-10 00:10:47.542381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.408  [2024-12-10 00:10:47.542412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.408  [2024-12-10 00:10:47.542418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:40512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.542751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.542856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.542863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:40400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.409  [2024-12-10 00:10:47.544022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.409  [2024-12-10 00:10:47.544150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.409  [2024-12-10 00:10:47.544159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.410  [2024-12-10 00:10:47.545689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.410  [2024-12-10 00:10:47.545757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.410  [2024-12-10 00:10:47.545763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.545775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.545782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.545794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.545801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.546481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.546549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.546556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.411  [2024-12-10 00:10:47.547663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.547681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.547703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.547721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.411  [2024-12-10 00:10:47.547740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.411  [2024-12-10 00:10:47.547752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.547759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.547778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.547789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.547796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.547809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.547815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.547828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.547835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:40688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.548861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:40528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.548921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.548940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.548989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.548996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.549127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.549355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.549373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.412  [2024-12-10 00:10:47.549411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.412  [2024-12-10 00:10:47.549441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.412  [2024-12-10 00:10:47.549448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.549467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.549486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.549505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.549524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.549545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.549564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.549576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.549582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.550144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.550286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.550304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.550326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.550339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.550345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:41288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:41352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:41384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.413  [2024-12-10 00:10:47.551660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.551678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.551699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.551718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.413  [2024-12-10 00:10:47.551737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.413  [2024-12-10 00:10:47.551749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.551849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.551887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.551963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.551982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.551994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.414  [2024-12-10 00:10:47.552560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.414  [2024-12-10 00:10:47.552617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.414  [2024-12-10 00:10:47.552629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:41480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:41512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.552826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.552876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.552882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.415  [2024-12-10 00:10:47.554585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.415  [2024-12-10 00:10:47.554914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:29:34.415  [2024-12-10 00:10:47.554928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.554935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.554947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.554954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.554966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.554974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.554985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.554993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.555128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.555227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.555246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.555265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.555283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.555315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.555323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:40608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.556196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.416  [2024-12-10 00:10:47.556218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:41720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:41752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:41784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.416  [2024-12-10 00:10:47.556388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:34.416  [2024-12-10 00:10:47.556400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.556447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.556466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.556560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.556572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.556579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.557058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:41424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:41096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.557155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:41272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.557219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.557237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.557250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.557256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.558495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:41504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.558526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:41536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.558545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:41568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.417  [2024-12-10 00:10:47.558569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:34.417  [2024-12-10 00:10:47.558695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.417  [2024-12-10 00:10:47.558702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.558778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.558798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.558817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.558836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.558873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:41072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:41232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.558986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.558999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.559049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:41360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:41496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.559125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:40888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.559605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.418  [2024-12-10 00:10:47.559625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:41160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.418  [2024-12-10 00:10:47.559663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:34.418  [2024-12-10 00:10:47.559675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:41648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.419  [2024-12-10 00:10:47.559724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:41952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.419  [2024-12-10 00:10:47.559743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:41968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.419  [2024-12-10 00:10:47.559762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:41984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.419  [2024-12-10 00:10:47.559780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:41680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:41712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:41744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:34.419  [2024-12-10 00:10:47.559850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.419  [2024-12-10 00:10:47.559857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:34.419      10638.26 IOPS,    41.56 MiB/s
[2024-12-09T23:10:50.276Z]     10668.46 IOPS,    41.67 MiB/s
[2024-12-09T23:10:50.276Z] Received shutdown signal, test time was about 28.979883 seconds
00:29:34.419  
00:29:34.419                                                                                                  Latency(us)
00:29:34.419  
[2024-12-09T23:10:50.276Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:34.419  Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:29:34.419  	 Verification LBA range: start 0x0 length 0x4000
00:29:34.419  	 Nvme0n1             :      28.98   10691.38      41.76       0.00     0.00   11952.69     628.05 3019898.88
00:29:34.419  
[2024-12-09T23:10:50.276Z]  ===================================================================================================================
00:29:34.419  
[2024-12-09T23:10:50.276Z]  Total                       :              10691.38      41.76       0.00     0.00   11952.69     628.05 3019898.88
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:34.419   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:34.419  rmmod nvme_tcp
00:29:34.744  rmmod nvme_fabrics
00:29:34.744  rmmod nvme_keyring
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3200617 ']'
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3200617
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3200617 ']'
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3200617
00:29:34.744    00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:34.744    00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3200617
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3200617'
00:29:34.744  killing process with pid 3200617
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3200617
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3200617
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:34.744   00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:34.744    00:10:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:37.316  
00:29:37.316  real	0m41.443s
00:29:37.316  user	1m52.187s
00:29:37.316  sys	0m11.721s
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:29:37.316  ************************************
00:29:37.316  END TEST nvmf_host_multipath_status
00:29:37.316  ************************************
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:37.316  ************************************
00:29:37.316  START TEST nvmf_discovery_remove_ifc
00:29:37.316  ************************************
00:29:37.316   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:29:37.316  * Looking for test storage...
00:29:37.316  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:37.316     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version
00:29:37.316     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:37.316    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:37.317  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.317  		--rc genhtml_branch_coverage=1
00:29:37.317  		--rc genhtml_function_coverage=1
00:29:37.317  		--rc genhtml_legend=1
00:29:37.317  		--rc geninfo_all_blocks=1
00:29:37.317  		--rc geninfo_unexecuted_blocks=1
00:29:37.317  		
00:29:37.317  		'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:37.317  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.317  		--rc genhtml_branch_coverage=1
00:29:37.317  		--rc genhtml_function_coverage=1
00:29:37.317  		--rc genhtml_legend=1
00:29:37.317  		--rc geninfo_all_blocks=1
00:29:37.317  		--rc geninfo_unexecuted_blocks=1
00:29:37.317  		
00:29:37.317  		'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:37.317  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.317  		--rc genhtml_branch_coverage=1
00:29:37.317  		--rc genhtml_function_coverage=1
00:29:37.317  		--rc genhtml_legend=1
00:29:37.317  		--rc geninfo_all_blocks=1
00:29:37.317  		--rc geninfo_unexecuted_blocks=1
00:29:37.317  		
00:29:37.317  		'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:37.317  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:37.317  		--rc genhtml_branch_coverage=1
00:29:37.317  		--rc genhtml_function_coverage=1
00:29:37.317  		--rc genhtml_legend=1
00:29:37.317  		--rc geninfo_all_blocks=1
00:29:37.317  		--rc geninfo_unexecuted_blocks=1
00:29:37.317  		
00:29:37.317  		'
00:29:37.317   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:37.317     00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:37.317      00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.317      00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.317      00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.317      00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:29:37.317      00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:37.317  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:37.317    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:37.318    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:37.318    00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable
00:29:37.318   00:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=()
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx
00:29:42.593   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:42.594   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:42.594   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:42.594   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:42.594   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:42.594   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:29:42.853  Found 0000:af:00.0 (0x8086 - 0x159b)
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:29:42.853  Found 0000:af:00.1 (0x8086 - 0x159b)
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:42.853   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:29:42.854  Found net devices under 0000:af:00.0: cvl_0_0
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:29:42.854  Found net devices under 0000:af:00.1: cvl_0_1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:42.854   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:43.115   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:43.115   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:43.115   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:43.115   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:43.115  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:43.115  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms
00:29:43.115  
00:29:43.115  --- 10.0.0.2 ping statistics ---
00:29:43.116  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:43.116  rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:43.116  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:43.116  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms
00:29:43.116  
00:29:43.116  --- 10.0.0.1 ping statistics ---
00:29:43.116  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:43.116  rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3209552
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3209552
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3209552 ']'
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:43.116  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:43.116   00:10:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.116  [2024-12-10 00:10:58.867279] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:29:43.116  [2024-12-10 00:10:58.867322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:43.116  [2024-12-10 00:10:58.946587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:43.377  [2024-12-10 00:10:58.986634] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:43.377  [2024-12-10 00:10:58.986664] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:43.377  [2024-12-10 00:10:58.986674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:43.377  [2024-12-10 00:10:58.986680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:43.377  [2024-12-10 00:10:58.986685] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:43.377  [2024-12-10 00:10:58.987186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.377  [2024-12-10 00:10:59.129294] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:43.377  [2024-12-10 00:10:59.137455] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:29:43.377  null0
00:29:43.377  [2024-12-10 00:10:59.169451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3209670
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3209670 /tmp/host.sock
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3209670 ']'
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:29:43.377  Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:43.377   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.636  [2024-12-10 00:10:59.239101] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:29:43.636  [2024-12-10 00:10:59.239151] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3209670 ]
00:29:43.636  [2024-12-10 00:10:59.312416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:43.636  [2024-12-10 00:10:59.353597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:43.637   00:10:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:45.011  [2024-12-10 00:11:00.527320] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:29:45.011  [2024-12-10 00:11:00.527345] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:29:45.011  [2024-12-10 00:11:00.527363] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:29:45.011  [2024-12-10 00:11:00.613634] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:29:45.011  [2024-12-10 00:11:00.836773] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:29:45.011  [2024-12-10 00:11:00.837549] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd27b50:1 started.
00:29:45.011  [2024-12-10 00:11:00.838853] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:29:45.011  [2024-12-10 00:11:00.838890] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:29:45.011  [2024-12-10 00:11:00.838908] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:29:45.011  [2024-12-10 00:11:00.838921] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:29:45.011  [2024-12-10 00:11:00.838940] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:29:45.011   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.011   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:29:45.011    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:45.011    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:45.011    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.011    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:45.011  [2024-12-10 00:11:00.844985] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd27b50 was disconnected and freed. delete nvme_qpair.
00:29:45.012    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:45.012    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:45.012    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:45.012    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.270   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:29:45.270   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:29:45.270   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:29:45.270   00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:45.270    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:45.271    00:11:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:45.271    00:11:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:45.271   00:11:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:45.271   00:11:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:46.208    00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.466   00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:46.466   00:11:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:47.404    00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:47.404   00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:47.404   00:11:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:48.341    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:48.341    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:48.341    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:48.341    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:48.342    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:48.342    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:48.342    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:48.342    00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:48.342   00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:48.342   00:11:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:49.724    00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:49.724   00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:49.724   00:11:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:50.661    00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:50.661  [2024-12-10 00:11:06.280384] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:29:50.661  [2024-12-10 00:11:06.280422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:50.661  [2024-12-10 00:11:06.280431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.661  [2024-12-10 00:11:06.280457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:50.661  [2024-12-10 00:11:06.280464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.661  [2024-12-10 00:11:06.280471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:50.661  [2024-12-10 00:11:06.280478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.661  [2024-12-10 00:11:06.280489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:50.661  [2024-12-10 00:11:06.280496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.661  [2024-12-10 00:11:06.280503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:50.661  [2024-12-10 00:11:06.280509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:50.661  [2024-12-10 00:11:06.280516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd04310 is same with the state(6) to be set
00:29:50.661   00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:50.661   00:11:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:50.661  [2024-12-10 00:11:06.290408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd04310 (9): Bad file descriptor
00:29:50.661  [2024-12-10 00:11:06.300442] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:50.661  [2024-12-10 00:11:06.300452] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:50.661  [2024-12-10 00:11:06.300458] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:50.661  [2024-12-10 00:11:06.300466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:50.661  [2024-12-10 00:11:06.300482] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:51.597  [2024-12-10 00:11:07.304212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:29:51.597  [2024-12-10 00:11:07.304289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd04310 with addr=10.0.0.2, port=4420
00:29:51.597  [2024-12-10 00:11:07.304320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd04310 is same with the state(6) to be set
00:29:51.597  [2024-12-10 00:11:07.304375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd04310 (9): Bad file descriptor
00:29:51.597  [2024-12-10 00:11:07.305325] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:29:51.597  [2024-12-10 00:11:07.305388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:51.597  [2024-12-10 00:11:07.305411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:51.597  [2024-12-10 00:11:07.305434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:51.597  [2024-12-10 00:11:07.305454] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:51.597  [2024-12-10 00:11:07.305469] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:51.597  [2024-12-10 00:11:07.305491] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:51.597  [2024-12-10 00:11:07.305514] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:51.597  [2024-12-10 00:11:07.305528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:51.597    00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:51.597   00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:29:51.597   00:11:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:52.540  [2024-12-10 00:11:08.308036] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:52.540  [2024-12-10 00:11:08.308057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:52.540  [2024-12-10 00:11:08.308067] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:52.540  [2024-12-10 00:11:08.308074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:52.540  [2024-12-10 00:11:08.308081] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:29:52.540  [2024-12-10 00:11:08.308087] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:52.540  [2024-12-10 00:11:08.308091] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:52.540  [2024-12-10 00:11:08.308095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:52.540  [2024-12-10 00:11:08.308114] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:29:52.540  [2024-12-10 00:11:08.308132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:52.540  [2024-12-10 00:11:08.308141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.540  [2024-12-10 00:11:08.308150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:52.540  [2024-12-10 00:11:08.308157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.540  [2024-12-10 00:11:08.308163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:52.540  [2024-12-10 00:11:08.308173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.540  [2024-12-10 00:11:08.308180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:52.540  [2024-12-10 00:11:08.308186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.540  [2024-12-10 00:11:08.308193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:52.540  [2024-12-10 00:11:08.308199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:52.540  [2024-12-10 00:11:08.308205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:29:52.540  [2024-12-10 00:11:08.308551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf3a60 (9): Bad file descriptor
00:29:52.540  [2024-12-10 00:11:08.309561] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:29:52.540  [2024-12-10 00:11:08.309575] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:52.540    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:52.802    00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:29:52.802   00:11:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:53.739    00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:53.996   00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:29:53.996   00:11:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:54.562  [2024-12-10 00:11:10.361559] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:29:54.562  [2024-12-10 00:11:10.361581] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:29:54.562  [2024-12-10 00:11:10.361594] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:29:54.821  [2024-12-10 00:11:10.488975] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:54.822    00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:54.822   00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:29:54.822   00:11:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:29:55.080  [2024-12-10 00:11:10.710998] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:29:55.080  [2024-12-10 00:11:10.711531] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xd06650:1 started.
00:29:55.080  [2024-12-10 00:11:10.712548] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:29:55.080  [2024-12-10 00:11:10.712577] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:29:55.080  [2024-12-10 00:11:10.712594] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:29:55.080  [2024-12-10 00:11:10.712607] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:29:55.081  [2024-12-10 00:11:10.712614] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:29:55.081  [2024-12-10 00:11:10.720095] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xd06650 was disconnected and freed. delete nvme_qpair.
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3209670
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3209670 ']'
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3209670
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:56.017    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3209670
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3209670'
00:29:56.017  killing process with pid 3209670
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3209670
00:29:56.017   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3209670
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:56.275  rmmod nvme_tcp
00:29:56.275  rmmod nvme_fabrics
00:29:56.275  rmmod nvme_keyring
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3209552 ']'
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3209552
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3209552 ']'
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3209552
00:29:56.275    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:29:56.275   00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:56.275    00:11:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3209552
00:29:56.275   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:56.275   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:56.275   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3209552'
00:29:56.275  killing process with pid 3209552
00:29:56.275   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3209552
00:29:56.275   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3209552
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:56.534   00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:56.534    00:11:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:58.439   00:11:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:58.439  
00:29:58.439  real	0m21.623s
00:29:58.439  user	0m26.986s
00:29:58.439  sys	0m5.850s
00:29:58.439   00:11:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:58.439   00:11:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:29:58.439  ************************************
00:29:58.439  END TEST nvmf_discovery_remove_ifc
00:29:58.439  ************************************
00:29:58.699   00:11:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:29:58.699   00:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:58.699   00:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:58.699   00:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:58.699  ************************************
00:29:58.699  START TEST nvmf_identify_kernel_target
00:29:58.699  ************************************
00:29:58.699   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:29:58.699  * Looking for test storage...
00:29:58.699  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:58.699    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:58.699     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version
00:29:58.699     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:58.699    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:58.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:58.700  		--rc genhtml_branch_coverage=1
00:29:58.700  		--rc genhtml_function_coverage=1
00:29:58.700  		--rc genhtml_legend=1
00:29:58.700  		--rc geninfo_all_blocks=1
00:29:58.700  		--rc geninfo_unexecuted_blocks=1
00:29:58.700  		
00:29:58.700  		'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:58.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:58.700  		--rc genhtml_branch_coverage=1
00:29:58.700  		--rc genhtml_function_coverage=1
00:29:58.700  		--rc genhtml_legend=1
00:29:58.700  		--rc geninfo_all_blocks=1
00:29:58.700  		--rc geninfo_unexecuted_blocks=1
00:29:58.700  		
00:29:58.700  		'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:29:58.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:58.700  		--rc genhtml_branch_coverage=1
00:29:58.700  		--rc genhtml_function_coverage=1
00:29:58.700  		--rc genhtml_legend=1
00:29:58.700  		--rc geninfo_all_blocks=1
00:29:58.700  		--rc geninfo_unexecuted_blocks=1
00:29:58.700  		
00:29:58.700  		'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:29:58.700  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:58.700  		--rc genhtml_branch_coverage=1
00:29:58.700  		--rc genhtml_function_coverage=1
00:29:58.700  		--rc genhtml_legend=1
00:29:58.700  		--rc geninfo_all_blocks=1
00:29:58.700  		--rc geninfo_unexecuted_blocks=1
00:29:58.700  		
00:29:58.700  		'
00:29:58.700   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:58.700    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:58.700     00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:58.700      00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.700      00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.700      00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.700      00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH
00:29:58.958      00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:29:58.958  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:29:58.958    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:29:58.958   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:58.959    00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable
00:29:58.959   00:11:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=()
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:30:05.531  Found 0000:af:00.0 (0x8086 - 0x159b)
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:30:05.531  Found 0000:af:00.1 (0x8086 - 0x159b)
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:30:05.531  Found net devices under 0000:af:00.0: cvl_0_0
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:30:05.531  Found net devices under 0000:af:00.1: cvl_0_1
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:05.531   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:05.532  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:05.532  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms
00:30:05.532  
00:30:05.532  --- 10.0.0.2 ping statistics ---
00:30:05.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:05.532  rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:05.532  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:05.532  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms
00:30:05.532  
00:30:05.532  --- 10.0.0.1 ping statistics ---
00:30:05.532  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:05.532  rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:05.532    00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:30:05.532   00:11:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:30:07.438  Waiting for block devices as requested
00:30:07.438  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:30:07.438  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:07.696  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:07.696  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:07.696  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:07.956  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:07.956  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:07.956  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:08.216  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:08.216  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:08.216  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:08.216  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:08.475  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:08.475  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:08.475  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:08.734  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:08.734  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:30:08.734  No valid GPT data, bailing
00:30:08.734    00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:30:08.734   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:30:08.994   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:30:08.994   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:30:08.994   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:30:08.994   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:30:08.995  
00:30:08.995  Discovery Log Number of Records 2, Generation counter 2
00:30:08.995  =====Discovery Log Entry 0======
00:30:08.995  trtype:  tcp
00:30:08.995  adrfam:  ipv4
00:30:08.995  subtype: current discovery subsystem
00:30:08.995  treq:    not specified, sq flow control disable supported
00:30:08.995  portid:  1
00:30:08.995  trsvcid: 4420
00:30:08.995  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:30:08.995  traddr:  10.0.0.1
00:30:08.995  eflags:  none
00:30:08.995  sectype: none
00:30:08.995  =====Discovery Log Entry 1======
00:30:08.995  trtype:  tcp
00:30:08.995  adrfam:  ipv4
00:30:08.995  subtype: nvme subsystem
00:30:08.995  treq:    not specified, sq flow control disable supported
00:30:08.995  portid:  1
00:30:08.995  trsvcid: 4420
00:30:08.995  subnqn:  nqn.2016-06.io.spdk:testnqn
00:30:08.995  traddr:  10.0.0.1
00:30:08.995  eflags:  none
00:30:08.995  sectype: none
00:30:08.995   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1
00:30:08.995  	trsvcid:4420 	subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:30:08.995  =====================================================
00:30:08.995  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:30:08.995  =====================================================
00:30:08.995  Controller Capabilities/Features
00:30:08.995  ================================
00:30:08.995  Vendor ID:                             0000
00:30:08.995  Subsystem Vendor ID:                   0000
00:30:08.995  Serial Number:                         1cba5bd4b04f2a7a3cac
00:30:08.995  Model Number:                          Linux
00:30:08.995  Firmware Version:                      6.8.9-20
00:30:08.995  Recommended Arb Burst:                 0
00:30:08.995  IEEE OUI Identifier:                   00 00 00
00:30:08.995  Multi-path I/O
00:30:08.995    May have multiple subsystem ports:   No
00:30:08.995    May have multiple controllers:       No
00:30:08.995    Associated with SR-IOV VF:           No
00:30:08.995  Max Data Transfer Size:                Unlimited
00:30:08.995  Max Number of Namespaces:              0
00:30:08.995  Max Number of I/O Queues:              1024
00:30:08.995  NVMe Specification Version (VS):       1.3
00:30:08.995  NVMe Specification Version (Identify): 1.3
00:30:08.995  Maximum Queue Entries:                 1024
00:30:08.995  Contiguous Queues Required:            No
00:30:08.995  Arbitration Mechanisms Supported
00:30:08.995    Weighted Round Robin:                Not Supported
00:30:08.995    Vendor Specific:                     Not Supported
00:30:08.995  Reset Timeout:                         7500 ms
00:30:08.995  Doorbell Stride:                       4 bytes
00:30:08.995  NVM Subsystem Reset:                   Not Supported
00:30:08.995  Command Sets Supported
00:30:08.995    NVM Command Set:                     Supported
00:30:08.995  Boot Partition:                        Not Supported
00:30:08.995  Memory Page Size Minimum:              4096 bytes
00:30:08.995  Memory Page Size Maximum:              4096 bytes
00:30:08.995  Persistent Memory Region:              Not Supported
00:30:08.995  Optional Asynchronous Events Supported
00:30:08.995    Namespace Attribute Notices:         Not Supported
00:30:08.995    Firmware Activation Notices:         Not Supported
00:30:08.995    ANA Change Notices:                  Not Supported
00:30:08.995    PLE Aggregate Log Change Notices:    Not Supported
00:30:08.995    LBA Status Info Alert Notices:       Not Supported
00:30:08.995    EGE Aggregate Log Change Notices:    Not Supported
00:30:08.995    Normal NVM Subsystem Shutdown event: Not Supported
00:30:08.995    Zone Descriptor Change Notices:      Not Supported
00:30:08.995    Discovery Log Change Notices:        Supported
00:30:08.995  Controller Attributes
00:30:08.995    128-bit Host Identifier:             Not Supported
00:30:08.995    Non-Operational Permissive Mode:     Not Supported
00:30:08.995    NVM Sets:                            Not Supported
00:30:08.995    Read Recovery Levels:                Not Supported
00:30:08.995    Endurance Groups:                    Not Supported
00:30:08.995    Predictable Latency Mode:            Not Supported
00:30:08.995    Traffic Based Keep ALive:            Not Supported
00:30:08.995    Namespace Granularity:               Not Supported
00:30:08.995    SQ Associations:                     Not Supported
00:30:08.995    UUID List:                           Not Supported
00:30:08.995    Multi-Domain Subsystem:              Not Supported
00:30:08.995    Fixed Capacity Management:           Not Supported
00:30:08.995    Variable Capacity Management:        Not Supported
00:30:08.995    Delete Endurance Group:              Not Supported
00:30:08.995    Delete NVM Set:                      Not Supported
00:30:08.995    Extended LBA Formats Supported:      Not Supported
00:30:08.995    Flexible Data Placement Supported:   Not Supported
00:30:08.995  
00:30:08.995  Controller Memory Buffer Support
00:30:08.995  ================================
00:30:08.995  Supported:                             No
00:30:08.995  
00:30:08.995  Persistent Memory Region Support
00:30:08.995  ================================
00:30:08.995  Supported:                             No
00:30:08.995  
00:30:08.995  Admin Command Set Attributes
00:30:08.995  ============================
00:30:08.995  Security Send/Receive:                 Not Supported
00:30:08.995  Format NVM:                            Not Supported
00:30:08.995  Firmware Activate/Download:            Not Supported
00:30:08.995  Namespace Management:                  Not Supported
00:30:08.995  Device Self-Test:                      Not Supported
00:30:08.995  Directives:                            Not Supported
00:30:08.995  NVMe-MI:                               Not Supported
00:30:08.995  Virtualization Management:             Not Supported
00:30:08.995  Doorbell Buffer Config:                Not Supported
00:30:08.995  Get LBA Status Capability:             Not Supported
00:30:08.995  Command & Feature Lockdown Capability: Not Supported
00:30:08.995  Abort Command Limit:                   1
00:30:08.995  Async Event Request Limit:             1
00:30:08.995  Number of Firmware Slots:              N/A
00:30:08.995  Firmware Slot 1 Read-Only:             N/A
00:30:08.995  Firmware Activation Without Reset:     N/A
00:30:08.995  Multiple Update Detection Support:     N/A
00:30:08.995  Firmware Update Granularity:           No Information Provided
00:30:08.995  Per-Namespace SMART Log:               No
00:30:08.995  Asymmetric Namespace Access Log Page:  Not Supported
00:30:08.995  Subsystem NQN:                         nqn.2014-08.org.nvmexpress.discovery
00:30:08.996  Command Effects Log Page:              Not Supported
00:30:08.996  Get Log Page Extended Data:            Supported
00:30:08.996  Telemetry Log Pages:                   Not Supported
00:30:08.996  Persistent Event Log Pages:            Not Supported
00:30:08.996  Supported Log Pages Log Page:          May Support
00:30:08.996  Commands Supported & Effects Log Page: Not Supported
00:30:08.996  Feature Identifiers & Effects Log Page:May Support
00:30:08.996  NVMe-MI Commands & Effects Log Page:   May Support
00:30:08.996  Data Area 4 for Telemetry Log:         Not Supported
00:30:08.996  Error Log Page Entries Supported:      1
00:30:08.996  Keep Alive:                            Not Supported
00:30:08.996  
00:30:08.996  NVM Command Set Attributes
00:30:08.996  ==========================
00:30:08.996  Submission Queue Entry Size
00:30:08.996    Max:                       1
00:30:08.996    Min:                       1
00:30:08.996  Completion Queue Entry Size
00:30:08.996    Max:                       1
00:30:08.996    Min:                       1
00:30:08.996  Number of Namespaces:        0
00:30:08.996  Compare Command:             Not Supported
00:30:08.996  Write Uncorrectable Command: Not Supported
00:30:08.996  Dataset Management Command:  Not Supported
00:30:08.996  Write Zeroes Command:        Not Supported
00:30:08.996  Set Features Save Field:     Not Supported
00:30:08.996  Reservations:                Not Supported
00:30:08.996  Timestamp:                   Not Supported
00:30:08.996  Copy:                        Not Supported
00:30:08.996  Volatile Write Cache:        Not Present
00:30:08.996  Atomic Write Unit (Normal):  1
00:30:08.996  Atomic Write Unit (PFail):   1
00:30:08.996  Atomic Compare & Write Unit: 1
00:30:08.996  Fused Compare & Write:       Not Supported
00:30:08.996  Scatter-Gather List
00:30:08.996    SGL Command Set:           Supported
00:30:08.996    SGL Keyed:                 Not Supported
00:30:08.996    SGL Bit Bucket Descriptor: Not Supported
00:30:08.996    SGL Metadata Pointer:      Not Supported
00:30:08.996    Oversized SGL:             Not Supported
00:30:08.996    SGL Metadata Address:      Not Supported
00:30:08.996    SGL Offset:                Supported
00:30:08.996    Transport SGL Data Block:  Not Supported
00:30:08.996  Replay Protected Memory Block:  Not Supported
00:30:08.996  
00:30:08.996  Firmware Slot Information
00:30:08.996  =========================
00:30:08.996  Active slot:                 0
00:30:08.996  
00:30:08.996  
00:30:08.996  Error Log
00:30:08.996  =========
00:30:08.996  
00:30:08.996  Active Namespaces
00:30:08.996  =================
00:30:08.996  Discovery Log Page
00:30:08.996  ==================
00:30:08.996  Generation Counter:                    2
00:30:08.996  Number of Records:                     2
00:30:08.996  Record Format:                         0
00:30:08.996  
00:30:08.996  Discovery Log Entry 0
00:30:08.996  ----------------------
00:30:08.996  Transport Type:                        3 (TCP)
00:30:08.996  Address Family:                        1 (IPv4)
00:30:08.996  Subsystem Type:                        3 (Current Discovery Subsystem)
00:30:08.996  Entry Flags:
00:30:08.996    Duplicate Returned Information:			0
00:30:08.996    Explicit Persistent Connection Support for Discovery: 0
00:30:08.996  Transport Requirements:
00:30:08.996    Secure Channel:                      Not Specified
00:30:08.996  Port ID:                               1 (0x0001)
00:30:08.996  Controller ID:                         65535 (0xffff)
00:30:08.996  Admin Max SQ Size:                     32
00:30:08.996  Transport Service Identifier:          4420
00:30:08.996  NVM Subsystem Qualified Name:          nqn.2014-08.org.nvmexpress.discovery
00:30:08.996  Transport Address:                     10.0.0.1
00:30:08.996  Discovery Log Entry 1
00:30:08.996  ----------------------
00:30:08.996  Transport Type:                        3 (TCP)
00:30:08.996  Address Family:                        1 (IPv4)
00:30:08.996  Subsystem Type:                        2 (NVM Subsystem)
00:30:08.996  Entry Flags:
00:30:08.996    Duplicate Returned Information:			0
00:30:08.996    Explicit Persistent Connection Support for Discovery: 0
00:30:08.996  Transport Requirements:
00:30:08.996    Secure Channel:                      Not Specified
00:30:08.996  Port ID:                               1 (0x0001)
00:30:08.996  Controller ID:                         65535 (0xffff)
00:30:08.996  Admin Max SQ Size:                     32
00:30:08.996  Transport Service Identifier:          4420
00:30:08.996  NVM Subsystem Qualified Name:          nqn.2016-06.io.spdk:testnqn
00:30:08.996  Transport Address:                     10.0.0.1
00:30:08.996   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '	trtype:tcp 	adrfam:IPv4 	traddr:10.0.0.1 	trsvcid:4420 	subnqn:nqn.2016-06.io.spdk:testnqn'
00:30:09.359  get_feature(0x01) failed
00:30:09.359  get_feature(0x02) failed
00:30:09.359  get_feature(0x04) failed
00:30:09.359  =====================================================
00:30:09.359  NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:30:09.359  =====================================================
00:30:09.359  Controller Capabilities/Features
00:30:09.359  ================================
00:30:09.359  Vendor ID:                             0000
00:30:09.359  Subsystem Vendor ID:                   0000
00:30:09.359  Serial Number:                         a3ee6042ec76e7711ec2
00:30:09.359  Model Number:                          SPDK-nqn.2016-06.io.spdk:testnqn
00:30:09.359  Firmware Version:                      6.8.9-20
00:30:09.359  Recommended Arb Burst:                 6
00:30:09.359  IEEE OUI Identifier:                   00 00 00
00:30:09.359  Multi-path I/O
00:30:09.359    May have multiple subsystem ports:   Yes
00:30:09.359    May have multiple controllers:       Yes
00:30:09.359    Associated with SR-IOV VF:           No
00:30:09.359  Max Data Transfer Size:                Unlimited
00:30:09.359  Max Number of Namespaces:              1024
00:30:09.359  Max Number of I/O Queues:              128
00:30:09.359  NVMe Specification Version (VS):       1.3
00:30:09.359  NVMe Specification Version (Identify): 1.3
00:30:09.359  Maximum Queue Entries:                 1024
00:30:09.359  Contiguous Queues Required:            No
00:30:09.359  Arbitration Mechanisms Supported
00:30:09.359    Weighted Round Robin:                Not Supported
00:30:09.359    Vendor Specific:                     Not Supported
00:30:09.359  Reset Timeout:                         7500 ms
00:30:09.359  Doorbell Stride:                       4 bytes
00:30:09.359  NVM Subsystem Reset:                   Not Supported
00:30:09.359  Command Sets Supported
00:30:09.359    NVM Command Set:                     Supported
00:30:09.359  Boot Partition:                        Not Supported
00:30:09.359  Memory Page Size Minimum:              4096 bytes
00:30:09.359  Memory Page Size Maximum:              4096 bytes
00:30:09.359  Persistent Memory Region:              Not Supported
00:30:09.359  Optional Asynchronous Events Supported
00:30:09.359    Namespace Attribute Notices:         Supported
00:30:09.359    Firmware Activation Notices:         Not Supported
00:30:09.359    ANA Change Notices:                  Supported
00:30:09.359    PLE Aggregate Log Change Notices:    Not Supported
00:30:09.359    LBA Status Info Alert Notices:       Not Supported
00:30:09.359    EGE Aggregate Log Change Notices:    Not Supported
00:30:09.359    Normal NVM Subsystem Shutdown event: Not Supported
00:30:09.359    Zone Descriptor Change Notices:      Not Supported
00:30:09.359    Discovery Log Change Notices:        Not Supported
00:30:09.359  Controller Attributes
00:30:09.359    128-bit Host Identifier:             Supported
00:30:09.359    Non-Operational Permissive Mode:     Not Supported
00:30:09.359    NVM Sets:                            Not Supported
00:30:09.359    Read Recovery Levels:                Not Supported
00:30:09.360    Endurance Groups:                    Not Supported
00:30:09.360    Predictable Latency Mode:            Not Supported
00:30:09.360    Traffic Based Keep ALive:            Supported
00:30:09.360    Namespace Granularity:               Not Supported
00:30:09.360    SQ Associations:                     Not Supported
00:30:09.360    UUID List:                           Not Supported
00:30:09.360    Multi-Domain Subsystem:              Not Supported
00:30:09.360    Fixed Capacity Management:           Not Supported
00:30:09.360    Variable Capacity Management:        Not Supported
00:30:09.360    Delete Endurance Group:              Not Supported
00:30:09.360    Delete NVM Set:                      Not Supported
00:30:09.360    Extended LBA Formats Supported:      Not Supported
00:30:09.360    Flexible Data Placement Supported:   Not Supported
00:30:09.360  
00:30:09.360  Controller Memory Buffer Support
00:30:09.360  ================================
00:30:09.360  Supported:                             No
00:30:09.360  
00:30:09.360  Persistent Memory Region Support
00:30:09.360  ================================
00:30:09.360  Supported:                             No
00:30:09.360  
00:30:09.360  Admin Command Set Attributes
00:30:09.360  ============================
00:30:09.360  Security Send/Receive:                 Not Supported
00:30:09.360  Format NVM:                            Not Supported
00:30:09.360  Firmware Activate/Download:            Not Supported
00:30:09.360  Namespace Management:                  Not Supported
00:30:09.360  Device Self-Test:                      Not Supported
00:30:09.360  Directives:                            Not Supported
00:30:09.360  NVMe-MI:                               Not Supported
00:30:09.360  Virtualization Management:             Not Supported
00:30:09.360  Doorbell Buffer Config:                Not Supported
00:30:09.360  Get LBA Status Capability:             Not Supported
00:30:09.360  Command & Feature Lockdown Capability: Not Supported
00:30:09.360  Abort Command Limit:                   4
00:30:09.360  Async Event Request Limit:             4
00:30:09.360  Number of Firmware Slots:              N/A
00:30:09.360  Firmware Slot 1 Read-Only:             N/A
00:30:09.360  Firmware Activation Without Reset:     N/A
00:30:09.360  Multiple Update Detection Support:     N/A
00:30:09.360  Firmware Update Granularity:           No Information Provided
00:30:09.360  Per-Namespace SMART Log:               Yes
00:30:09.360  Asymmetric Namespace Access Log Page:  Supported
00:30:09.360  ANA Transition Time                 :  10 sec
00:30:09.360  
00:30:09.360  Asymmetric Namespace Access Capabilities
00:30:09.360    ANA Optimized State               : Supported
00:30:09.360    ANA Non-Optimized State           : Supported
00:30:09.360    ANA Inaccessible State            : Supported
00:30:09.360    ANA Persistent Loss State         : Supported
00:30:09.360    ANA Change State                  : Supported
00:30:09.360    ANAGRPID is not changed           : No
00:30:09.360    Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:30:09.360  
00:30:09.360  ANA Group Identifier Maximum        : 128
00:30:09.360  Number of ANA Group Identifiers     : 128
00:30:09.360  Max Number of Allowed Namespaces    : 1024
00:30:09.360  Subsystem NQN:                         nqn.2016-06.io.spdk:testnqn
00:30:09.360  Command Effects Log Page:              Supported
00:30:09.360  Get Log Page Extended Data:            Supported
00:30:09.360  Telemetry Log Pages:                   Not Supported
00:30:09.360  Persistent Event Log Pages:            Not Supported
00:30:09.360  Supported Log Pages Log Page:          May Support
00:30:09.360  Commands Supported & Effects Log Page: Not Supported
00:30:09.360  Feature Identifiers & Effects Log Page:May Support
00:30:09.360  NVMe-MI Commands & Effects Log Page:   May Support
00:30:09.360  Data Area 4 for Telemetry Log:         Not Supported
00:30:09.360  Error Log Page Entries Supported:      128
00:30:09.360  Keep Alive:                            Supported
00:30:09.360  Keep Alive Granularity:                1000 ms
00:30:09.360  
00:30:09.360  NVM Command Set Attributes
00:30:09.360  ==========================
00:30:09.360  Submission Queue Entry Size
00:30:09.360    Max:                       64
00:30:09.360    Min:                       64
00:30:09.360  Completion Queue Entry Size
00:30:09.360    Max:                       16
00:30:09.360    Min:                       16
00:30:09.360  Number of Namespaces:        1024
00:30:09.360  Compare Command:             Not Supported
00:30:09.360  Write Uncorrectable Command: Not Supported
00:30:09.360  Dataset Management Command:  Supported
00:30:09.360  Write Zeroes Command:        Supported
00:30:09.360  Set Features Save Field:     Not Supported
00:30:09.360  Reservations:                Not Supported
00:30:09.360  Timestamp:                   Not Supported
00:30:09.360  Copy:                        Not Supported
00:30:09.360  Volatile Write Cache:        Present
00:30:09.360  Atomic Write Unit (Normal):  1
00:30:09.360  Atomic Write Unit (PFail):   1
00:30:09.360  Atomic Compare & Write Unit: 1
00:30:09.360  Fused Compare & Write:       Not Supported
00:30:09.360  Scatter-Gather List
00:30:09.360    SGL Command Set:           Supported
00:30:09.360    SGL Keyed:                 Not Supported
00:30:09.360    SGL Bit Bucket Descriptor: Not Supported
00:30:09.360    SGL Metadata Pointer:      Not Supported
00:30:09.360    Oversized SGL:             Not Supported
00:30:09.360    SGL Metadata Address:      Not Supported
00:30:09.360    SGL Offset:                Supported
00:30:09.360    Transport SGL Data Block:  Not Supported
00:30:09.360  Replay Protected Memory Block:  Not Supported
00:30:09.360  
00:30:09.360  Firmware Slot Information
00:30:09.360  =========================
00:30:09.360  Active slot:                 0
00:30:09.360  
00:30:09.360  Asymmetric Namespace Access
00:30:09.360  ===========================
00:30:09.360  Change Count                    : 0
00:30:09.360  Number of ANA Group Descriptors : 1
00:30:09.360  ANA Group Descriptor            : 0
00:30:09.360    ANA Group ID                  : 1
00:30:09.360    Number of NSID Values         : 1
00:30:09.360    Change Count                  : 0
00:30:09.360    ANA State                     : 1
00:30:09.360    Namespace Identifier          : 1
00:30:09.360  
00:30:09.360  Commands Supported and Effects
00:30:09.360  ==============================
00:30:09.360  Admin Commands
00:30:09.360  --------------
00:30:09.360                    Get Log Page (02h): Supported 
00:30:09.360                        Identify (06h): Supported 
00:30:09.360                           Abort (08h): Supported 
00:30:09.360                    Set Features (09h): Supported 
00:30:09.360                    Get Features (0Ah): Supported 
00:30:09.360      Asynchronous Event Request (0Ch): Supported 
00:30:09.360                      Keep Alive (18h): Supported 
00:30:09.360  I/O Commands
00:30:09.360  ------------
00:30:09.360                           Flush (00h): Supported 
00:30:09.360                           Write (01h): Supported LBA-Change 
00:30:09.360                            Read (02h): Supported 
00:30:09.360                    Write Zeroes (08h): Supported LBA-Change 
00:30:09.360              Dataset Management (09h): Supported 
00:30:09.360  
00:30:09.360  Error Log
00:30:09.360  =========
00:30:09.360  Entry: 0
00:30:09.360  Error Count:            0x3
00:30:09.360  Submission Queue Id:    0x0
00:30:09.360  Command Id:             0x5
00:30:09.360  Phase Bit:              0
00:30:09.360  Status Code:            0x2
00:30:09.360  Status Code Type:       0x0
00:30:09.360  Do Not Retry:           1
00:30:09.360  Error Location:         0x28
00:30:09.360  LBA:                    0x0
00:30:09.360  Namespace:              0x0
00:30:09.360  Vendor Log Page:        0x0
00:30:09.360  -----------
00:30:09.360  Entry: 1
00:30:09.360  Error Count:            0x2
00:30:09.360  Submission Queue Id:    0x0
00:30:09.360  Command Id:             0x5
00:30:09.360  Phase Bit:              0
00:30:09.360  Status Code:            0x2
00:30:09.360  Status Code Type:       0x0
00:30:09.360  Do Not Retry:           1
00:30:09.360  Error Location:         0x28
00:30:09.360  LBA:                    0x0
00:30:09.360  Namespace:              0x0
00:30:09.360  Vendor Log Page:        0x0
00:30:09.360  -----------
00:30:09.360  Entry: 2
00:30:09.360  Error Count:            0x1
00:30:09.360  Submission Queue Id:    0x0
00:30:09.360  Command Id:             0x4
00:30:09.360  Phase Bit:              0
00:30:09.360  Status Code:            0x2
00:30:09.360  Status Code Type:       0x0
00:30:09.360  Do Not Retry:           1
00:30:09.360  Error Location:         0x28
00:30:09.360  LBA:                    0x0
00:30:09.360  Namespace:              0x0
00:30:09.360  Vendor Log Page:        0x0
00:30:09.360  
00:30:09.360  Number of Queues
00:30:09.360  ================
00:30:09.360  Number of I/O Submission Queues:      128
00:30:09.360  Number of I/O Completion Queues:      128
00:30:09.360  
00:30:09.360  ZNS Specific Controller Data
00:30:09.360  ============================
00:30:09.360  Zone Append Size Limit:      0
00:30:09.360  
00:30:09.360  
00:30:09.360  Active Namespaces
00:30:09.360  =================
00:30:09.360  get_feature(0x05) failed
00:30:09.360  Namespace ID:1
00:30:09.360  Command Set Identifier:                NVM (00h)
00:30:09.360  Deallocate:                            Supported
00:30:09.360  Deallocated/Unwritten Error:           Not Supported
00:30:09.360  Deallocated Read Value:                Unknown
00:30:09.360  Deallocate in Write Zeroes:            Not Supported
00:30:09.360  Deallocated Guard Field:               0xFFFF
00:30:09.360  Flush:                                 Supported
00:30:09.360  Reservation:                           Not Supported
00:30:09.360  Namespace Sharing Capabilities:        Multiple Controllers
00:30:09.360  Size (in LBAs):                        1953525168 (931GiB)
00:30:09.360  Capacity (in LBAs):                    1953525168 (931GiB)
00:30:09.360  Utilization (in LBAs):                 1953525168 (931GiB)
00:30:09.360  UUID:                                  5e02356e-39cd-434c-80d9-821160f383b9
00:30:09.360  Thin Provisioning:                     Not Supported
00:30:09.360  Per-NS Atomic Units:                   Yes
00:30:09.360    Atomic Boundary Size (Normal):       0
00:30:09.360    Atomic Boundary Size (PFail):        0
00:30:09.360    Atomic Boundary Offset:              0
00:30:09.360  NGUID/EUI64 Never Reused:              No
00:30:09.360  ANA group ID:                          1
00:30:09.360  Namespace Write Protected:             No
00:30:09.360  Number of LBA Formats:                 1
00:30:09.360  Current LBA Format:                    LBA Format #00
00:30:09.360  LBA Format #00: Data Size:   512  Metadata Size:     0
00:30:09.360  
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:09.360  rmmod nvme_tcp
00:30:09.360  rmmod nvme_fabrics
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:09.360   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore
00:30:09.361   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:09.361   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:09.361   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:09.361   00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:09.361    00:11:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:30:11.297   00:11:27 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:30:14.589  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:30:14.589  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:30:15.158  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:30:15.158  
00:30:15.158  real	0m16.608s
00:30:15.158  user	0m4.366s
00:30:15.158  sys	0m8.592s
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:30:15.158  ************************************
00:30:15.158  END TEST nvmf_identify_kernel_target
00:30:15.158  ************************************
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:15.158   00:11:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:15.417  ************************************
00:30:15.417  START TEST nvmf_auth_host
00:30:15.417  ************************************
00:30:15.417   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:30:15.417  * Looking for test storage...
00:30:15.417  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-:
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-:
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:15.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:15.417  		--rc genhtml_branch_coverage=1
00:30:15.417  		--rc genhtml_function_coverage=1
00:30:15.417  		--rc genhtml_legend=1
00:30:15.417  		--rc geninfo_all_blocks=1
00:30:15.417  		--rc geninfo_unexecuted_blocks=1
00:30:15.417  		
00:30:15.417  		'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:15.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:15.417  		--rc genhtml_branch_coverage=1
00:30:15.417  		--rc genhtml_function_coverage=1
00:30:15.417  		--rc genhtml_legend=1
00:30:15.417  		--rc geninfo_all_blocks=1
00:30:15.417  		--rc geninfo_unexecuted_blocks=1
00:30:15.417  		
00:30:15.417  		'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:30:15.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:15.417  		--rc genhtml_branch_coverage=1
00:30:15.417  		--rc genhtml_function_coverage=1
00:30:15.417  		--rc genhtml_legend=1
00:30:15.417  		--rc geninfo_all_blocks=1
00:30:15.417  		--rc geninfo_unexecuted_blocks=1
00:30:15.417  		
00:30:15.417  		'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:30:15.417  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:15.417  		--rc genhtml_branch_coverage=1
00:30:15.417  		--rc genhtml_function_coverage=1
00:30:15.417  		--rc genhtml_legend=1
00:30:15.417  		--rc geninfo_all_blocks=1
00:30:15.417  		--rc geninfo_unexecuted_blocks=1
00:30:15.417  		
00:30:15.417  		'
00:30:15.417   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:15.417    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:15.417     00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:15.417      00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.417      00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.417      00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.417      00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH
00:30:15.418      00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:30:15.418  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512")
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192")
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=()
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=()
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:15.418    00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable
00:30:15.418   00:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=()
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:30:21.987  Found 0000:af:00.0 (0x8086 - 0x159b)
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:30:21.987  Found 0000:af:00.1 (0x8086 - 0x159b)
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:30:21.987  Found net devices under 0000:af:00.0: cvl_0_0
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:30:21.987  Found net devices under 0000:af:00.1: cvl_0_1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:21.987   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:21.988   00:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:21.988  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:21.988  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms
00:30:21.988  
00:30:21.988  --- 10.0.0.2 ping statistics ---
00:30:21.988  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.988  rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:21.988  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:21.988  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:30:21.988  
00:30:21.988  --- 10.0.0.1 ping statistics ---
00:30:21.988  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.988  rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3221614
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3221614
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3221614 ']'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6fb343788940668399e14f4f4d438ec3
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Thy
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6fb343788940668399e14f4f4d438ec3 0
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6fb343788940668399e14f4f4d438ec3 0
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6fb343788940668399e14f4f4d438ec3
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Thy
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Thy
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Thy
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=32b37ce3ec60ac8794f4f65484c3f6d637a3c0a190fadd4b69bde2f41075811e
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4Wr
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 32b37ce3ec60ac8794f4f65484c3f6d637a3c0a190fadd4b69bde2f41075811e 3
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 32b37ce3ec60ac8794f4f65484c3f6d637a3c0a190fadd4b69bde2f41075811e 3
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=32b37ce3ec60ac8794f4f65484c3f6d637a3c0a190fadd4b69bde2f41075811e
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4Wr
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4Wr
00:30:21.988   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4Wr
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e793b9ec6b89e82bdc4314db9a86b94954fb729387dbedd6
00:30:21.988     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:21.988    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Prq
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e793b9ec6b89e82bdc4314db9a86b94954fb729387dbedd6 0
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e793b9ec6b89e82bdc4314db9a86b94954fb729387dbedd6 0
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e793b9ec6b89e82bdc4314db9a86b94954fb729387dbedd6
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Prq
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Prq
00:30:21.989   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.Prq
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=82a5dc93fec36a2edc41eae2bc709df25f8f4f3e78f58485
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.ZI7
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 82a5dc93fec36a2edc41eae2bc709df25f8f4f3e78f58485 2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 82a5dc93fec36a2edc41eae2bc709df25f8f4f3e78f58485 2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=82a5dc93fec36a2edc41eae2bc709df25f8f4f3e78f58485
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.ZI7
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.ZI7
00:30:21.989   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.ZI7
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cb74f3378f3ea40081cab2a3e9d1e425
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Uoy
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cb74f3378f3ea40081cab2a3e9d1e425 1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cb74f3378f3ea40081cab2a3e9d1e425 1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cb74f3378f3ea40081cab2a3e9d1e425
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Uoy
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Uoy
00:30:21.989   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Uoy
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=663b5fa583698a5d3cedbe23f95d35a5
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tSG
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 663b5fa583698a5d3cedbe23f95d35a5 1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 663b5fa583698a5d3cedbe23f95d35a5 1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=663b5fa583698a5d3cedbe23f95d35a5
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tSG
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tSG
00:30:21.989   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tSG
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9dd3661a8daeef84163a7d05671a046d649923df785b81e9
00:30:21.989     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GPH
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9dd3661a8daeef84163a7d05671a046d649923df785b81e9 2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9dd3661a8daeef84163a7d05671a046d649923df785b81e9 2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9dd3661a8daeef84163a7d05671a046d649923df785b81e9
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GPH
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GPH
00:30:21.989   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.GPH
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:21.989    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:21.990    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null
00:30:21.990    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32
00:30:21.990     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=32834b998902e259a4161f116a93cc69
00:30:22.249     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.X9O
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 32834b998902e259a4161f116a93cc69 0
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 32834b998902e259a4161f116a93cc69 0
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=32834b998902e259a4161f116a93cc69
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.X9O
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.X9O
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.X9O
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64
00:30:22.249     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=62e76f9fe29d4f1c69f492356eef1b695b9252b606e2c9122fee7d7c135f169b
00:30:22.249     00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.uFH
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 62e76f9fe29d4f1c69f492356eef1b695b9252b606e2c9122fee7d7c135f169b 3
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 62e76f9fe29d4f1c69f492356eef1b695b9252b606e2c9122fee7d7c135f169b 3
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=62e76f9fe29d4f1c69f492356eef1b695b9252b606e2c9122fee7d7c135f169b
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python -
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.uFH
00:30:22.249    00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.uFH
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.uFH
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]=
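The `gen_dhchap_key` steps traced above follow one pattern each time: read random hex from `/dev/urandom` via `xxd`, pass it through `format_dhchap_key` (which runs an inline `python -` heredoc), write the result to a `mktemp` file, and `chmod 0600` it. The heredoc itself is not echoed in the log, so the sketch below is an assumption about what it computes, based on the NVMe DH-HMAC-CHAP secret representation also used by nvme-cli: append a little-endian CRC-32 of the key bytes, base64-encode, and wrap in a `DHHC-1:<digest>:<b64>:` envelope, where the digest id matches the log's `digests` map (0=null, 1=sha256, 2=sha384, 3=sha512).

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest_id: int, prefix: str = "DHHC-1") -> str:
    """Sketch of the format_key python heredoc from nvmf/common.sh (assumed):
    key bytes + little-endian CRC-32 trailer, base64-encoded, wrapped in the
    DHHC-1:<two-hex-digit digest>:<base64>: envelope."""
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity trailer
    b64 = base64.b64encode(key + crc).decode()
    return "{}:{:02x}:{}:".format(prefix, digest_id, b64)

# For example, the first key from this trace: 32 hex chars (16 bytes), digest 0 (null)
secret = format_dhchap_key("6fb343788940668399e14f4f4d438ec3", 0)
```

The resulting string is what lands in files like `/tmp/spdk.key-null.Thy` and is later registered with `keyring_file_add_key`; the `chmod 0600` matters because the keyring module rejects world-readable key files.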
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3221614
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3221614 ']'
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:22.249  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:22.249   00:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.508   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:30:22.508   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0
00:30:22.508   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:22.508   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Thy
00:30:22.508   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4Wr ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4Wr
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.Prq
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.ZI7 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ZI7
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Uoy
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tSG ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tSG
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.GPH
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.X9O ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.X9O
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}"
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.uFH
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:22.509    00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:30:22.509   00:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:30:25.041  Waiting for block devices as requested
00:30:25.041  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:30:25.300  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:25.300  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:25.300  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:25.300  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:25.558  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:25.558  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:25.558  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:25.558  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:25.817  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:30:25.817  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:30:25.817  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:30:26.076  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:30:26.076  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:30:26.076  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:30:26.076  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:30:26.334  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:30:26.901  No valid GPT data, bailing
00:30:26.901    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:30:26.901   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:30:26.902  
00:30:26.902  Discovery Log Number of Records 2, Generation counter 2
00:30:26.902  =====Discovery Log Entry 0======
00:30:26.902  trtype:  tcp
00:30:26.902  adrfam:  ipv4
00:30:26.902  subtype: current discovery subsystem
00:30:26.902  treq:    not specified, sq flow control disable supported
00:30:26.902  portid:  1
00:30:26.902  trsvcid: 4420
00:30:26.902  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:30:26.902  traddr:  10.0.0.1
00:30:26.902  eflags:  none
00:30:26.902  sectype: none
00:30:26.902  =====Discovery Log Entry 1======
00:30:26.902  trtype:  tcp
00:30:26.902  adrfam:  ipv4
00:30:26.902  subtype: nvme subsystem
00:30:26.902  treq:    not specified, sq flow control disable supported
00:30:26.902  portid:  1
00:30:26.902  trsvcid: 4420
00:30:26.902  subnqn:  nqn.2024-02.io.spdk:cnode0
00:30:26.902  traddr:  10.0.0.1
00:30:26.902  eflags:  none
00:30:26.902  sectype: none
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:26.902    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:26.902   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.161  nvme0n1
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.161    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:27.161    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:27.161    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.161    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.161    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:27.161   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:27.162    00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.162   00:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.421  nvme0n1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:27.421    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.421   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.681  nvme0n1
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:27.681    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.681   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.940  nvme0n1
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:27.940  nvme0n1
00:30:27.940   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:27.940    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:27.941    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:27.941    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:27.941    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:28.199    00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.199   00:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.199  nvme0n1
00:30:28.199   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.199    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:28.199    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:28.199    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.199    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.199    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.457   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:28.457   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:28.457   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.457   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.457   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.458  nvme0n1
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.458    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.458   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.717  nvme0n1
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.717    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.717   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.979  nvme0n1
00:30:28.979   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:28.979    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.980    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:28.980    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:28.980   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:28.980   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:28.980   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:28.980   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:29.238   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:29.239    00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.239   00:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.239  nvme0n1
00:30:29.239   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.239    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:29.239    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:29.239    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.239    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.239    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.239   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:29.239   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:29.239   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.239   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.497  nvme0n1
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.497    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:29.497   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:29.756  nvme0n1
00:30:29.756   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:29.756    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.014   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:30.014    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:30.015    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:30.015   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:30.015   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.015   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.273  nvme0n1
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:30.273    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:30.273    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.273    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.273    00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.273   00:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.273   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:30.273    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:30.274    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:30.274    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:30.274    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:30.274   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:30.274   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.274   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.532  nvme0n1
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:30.532    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.532   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.790  nvme0n1
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.790    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:30.790    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:30.790    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.790    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.790    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:30.790   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:30.791    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:30.791   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.049  nvme0n1
00:30:31.049   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.049    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:31.049    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:31.049    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.049    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.049    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:31.308    00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:31.308   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.309   00:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.568  nvme0n1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:31.568    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:31.568   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.138  nvme0n1
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:32.138    00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.138   00:11:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.396  nvme0n1
00:30:32.396   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.396    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:32.396    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:32.396    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.396    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:32.655    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.655   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.914  nvme0n1
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.914    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:32.914    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:32.914    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.914    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.914    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:32.914   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.173   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:33.173    00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:33.173   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:33.173   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.173   00:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.432  nvme0n1
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:33.432    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.432   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.998  nvme0n1
00:30:33.998   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.998    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:33.998    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:33.998    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.998    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.998    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.998   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:33.998   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:33.998   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.998   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:33.999    00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:33.999   00:11:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:34.934  nvme0n1
00:30:34.934   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:34.934    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:34.934    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:34.934    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:34.934    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:34.935    00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:34.935   00:11:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:35.503  nvme0n1
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:35.503    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:35.503   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.071  nvme0n1
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:36.071    00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.071   00:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.639  nvme0n1
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:36.639    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.639   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.898  nvme0n1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:36.898    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:36.898   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.157  nvme0n1
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.157    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:37.157    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:37.157    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.157    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.157    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
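The `DHHC-1:...` strings echoed in the trace above are NVMe DH-HMAC-CHAP secret representations: `DHHC-1:<t>:<base64 payload>:`, where the middle field is a transformation indicator and the base64 payload decodes to the raw secret followed by a 4-byte CRC-32 trailer. A minimal sketch that unpacks `key1` exactly as it appears in this log (the field semantics here are my reading of the NVMe base specification's key format, not something the log itself states):

```shell
# Key copied verbatim from the nvmet_auth_set_key trace above (keyid 1).
key='DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:'

prefix=$(printf '%s' "$key" | cut -d: -f1)   # format tag, always "DHHC-1"
b64=$(printf '%s' "$key" | cut -d: -f3)      # base64(secret || crc32)

# Assumed layout: everything but the last 4 decoded bytes is the secret;
# the trailer is a CRC-32 over the secret (per the NVMe spec's key format).
blob_len=$(( $(printf '%s' "$b64" | base64 -d | wc -c) ))
secret=$(printf '%s' "$b64" | base64 -d | head -c $(( blob_len - 4 )))

echo "prefix=$prefix decoded=${blob_len}B secret=$secret"
```

For this particular key the payload decodes to 52 bytes: a 48-character ASCII secret plus the 4-byte CRC trailer.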
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:37.157   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:37.158    00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.158   00:11:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.417  nvme0n1
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:37.417    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.417   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.676  nvme0n1
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:37.676    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:37.676   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.677   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.677  nvme0n1
00:30:37.677   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.677    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:37.677    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:37.677    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.677    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.677    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
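At this point the trace moves from the ffdhe2048 group to ffdhe3072, which makes the overall shape of `auth.sh` visible: an outer loop over dhgroups and an inner loop over keyids, each iteration installing the key on the target, configuring the host side, attaching with DH-CHAP, verifying the controller name, and detaching. A dry-run sketch of that loop (RPC calls are stubbed here, the real script drives SPDK's `rpc.py`; the attach invocation in the log carries more flags such as `-f ipv4`, the host/subsystem NQNs, and `--dhchap-ctrlr-key`, and the full dhgroup list in `auth.sh` may be longer than the two groups seen so far):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stubs standing in for the real helpers: rpc_cmd talks to the SPDK target,
# nvmet_auth_set_key programs the kernel nvmet side in the actual test.
rpc_cmd() { echo "rpc: $*"; }
nvmet_auth_set_key() { echo "target key: digest=$1 dhgroup=$2 keyid=$3"; }

dhgroups=(ffdhe2048 ffdhe3072)   # groups observed in this log slice
keyids=(0 1 2 3 4)               # keyids 0-4 cycled in the trace

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${keyids[@]}"; do
    nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.1 -s 4420 \
      --dhchap-key "key${keyid}"
    rpc_cmd bdev_nvme_detach_controller nvme0   # clean up before the next keyid
  done
done
```

The detach between iterations matches the `bdev_nvme_detach_controller nvme0` calls that close out each block of the trace above.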
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.935   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.935    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:37.936   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:37.936   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.936   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.936  nvme0n1
00:30:37.936   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:37.936    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:38.194   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:38.195    00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.195   00:11:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.195  nvme0n1
00:30:38.195   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.195    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:38.195    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:38.195    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.195    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.195    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.452  nvme0n1
00:30:38.452   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:38.452    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:38.453    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.453    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.453    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.453   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:38.453   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:38.453   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.453   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.711  nvme0n1
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.711    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.711   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:30:38.970   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.971  nvme0n1
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.971    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:38.971   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:39.234    00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.234   00:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.234  nvme0n1
00:30:39.234   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.493    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:39.493    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:39.493    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.493    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.493    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:39.493   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:39.494    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.494   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.753  nvme0n1
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.753    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:39.753    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:39.753    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.753    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.753    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:39.753   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:39.754    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:39.754   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.013  nvme0n1
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:40.013    00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.013   00:11:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.272  nvme0n1
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.272    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:40.272    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:40.272    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.272    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.272    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:40.272   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.273   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:40.273    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:40.531   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:40.531   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.531   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.531  nvme0n1
00:30:40.531   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.531    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:40.790    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:40.790   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:40.791   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:40.791   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.049  nvme0n1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:41.049    00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.049   00:11:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.617  nvme0n1
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:41.617    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.617   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.876  nvme0n1
00:30:41.876   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:41.876    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:41.876    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:41.876    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:41.876    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:41.876    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:42.134    00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.134   00:11:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.396  nvme0n1
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:42.396    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.396   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.963  nvme0n1
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.963    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:42.963    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:42.963    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.963    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.963    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:42.963   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:42.964    00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:42.964   00:11:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.531  nvme0n1
00:30:43.531   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.531    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:43.531    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:43.531    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.531    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:43.532    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:43.532   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.100  nvme0n1
00:30:44.100   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.100    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:44.100    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:44.100    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.100    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.100    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.100   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:44.100   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:44.100   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.100   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:44.358    00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.358   00:11:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.923  nvme0n1
00:30:44.923   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.923    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:44.923    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:44.923    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.923    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.923    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.923   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:44.924    00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:44.924   00:12:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.603  nvme0n1
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:45.604    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:45.604   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.184  nvme0n1
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:46.184    00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.184   00:12:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.443  nvme0n1
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.443    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:46.443    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:46.443    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.443    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.443    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:46.443   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:46.444    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.444   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.704  nvme0n1
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:46.704    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:46.704   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.033  nvme0n1
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.033    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:47.033    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:47.033    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.033    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.033    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:47.033   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.034  nvme0n1
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:47.034    00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.034   00:12:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.293  nvme0n1
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.293   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:47.293    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:47.294    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:47.294   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:47.294   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.294   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.553  nvme0n1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:47.553    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.553   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.813  nvme0n1
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:47.813    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:47.813   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.072  nvme0n1
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.072   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:48.072    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:48.073    00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:48.073   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:48.073   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.073   00:12:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.332  nvme0n1
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:48.332    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.332   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.590  nvme0n1
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.590   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:48.590    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:48.591    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:48.591    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:48.591    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:48.591    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:48.591    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:48.591   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:48.591   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.591   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.849  nvme0n1
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:48.849   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:48.849    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:48.850    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:49.109   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:49.109   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.109   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.109  nvme0n1
00:30:49.109   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.109    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:49.109    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:49.109    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.109    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.109    00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.367   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:49.367   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:49.367   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.367   00:12:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:49.367   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:49.368    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.368   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.627  nvme0n1
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.627    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:49.627    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:49.627    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.627    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.627    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:49.627   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:49.628    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.628   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.887  nvme0n1
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:49.887   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:49.887    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:49.888    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:49.888   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:49.888   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:49.888   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.146  nvme0n1
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.146    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:50.146    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:50.146    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.146    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.146    00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:50.146   00:12:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:50.146   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:50.405    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:50.405   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:50.406   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.406   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.664  nvme0n1
00:30:50.664   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.664    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:50.664    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:50.664    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.664    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:50.665    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:50.665   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.232  nvme0n1
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:51.233    00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.233   00:12:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.492  nvme0n1
00:30:51.492   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.492    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:51.492    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:51.492    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.492    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.492    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.493   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:51.493   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:51.493   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.493   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:51.751    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:51.751   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.010  nvme0n1
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:52.010    00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.010   00:12:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.577  nvme0n1
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
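The trace above repeats one pattern per (digest, dhgroup, keyid) combination: set the key on the target side, restrict the initiator to the digest/dhgroup under test, attach the controller with the matching key pair, verify the controller name, then detach. A minimal sketch of that loop follows; `rpc_cmd` is stubbed here (in the real suite it talks to the SPDK RPC socket), and the digest/group/key lists are only the ones visible in this section of the log, not the suite's full matrix.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_auth_host attach/verify/detach loop seen in the trace.
# rpc_cmd is a stub; the real harness forwards these to SPDK's rpc.py.
rpc_cmd() { echo "rpc: $*" >&2; }

dhgroups=(ffdhe6144 ffdhe8192)   # groups exercised in this section
keys=(key0 key1 key2 key3 key4)  # DHHC-1 keys loaded earlier in the suite

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    # Restrict the initiator to one digest/dhgroup combination, then
    # attach with the matching host/ctrlr key pair for this keyid.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 \
      --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    # Real suite: name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    name=nvme0
    [[ $name == nvme0 ]] || exit 1   # authentication succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0
  done
done
echo done
```

Note the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion in the trace: when a keyid has no controller key (as with keyid 4 above, where `ckey=` is empty), the `--dhchap-ctrlr-key` argument is omitted entirely rather than passed empty.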
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmZiMzQzNzg4OTQwNjY4Mzk5ZTE0ZjRmNGQ0MzhlYzOTwJL0:
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=: ]]
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MzJiMzdjZTNlYzYwYWM4Nzk0ZjRmNjU0ODRjM2Y2ZDYzN2EzYzBhMTkwZmFkZDRiNjliZGUyZjQxMDc1ODExZU0aWL8=:
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:52.577    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:52.577   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.146  nvme0n1
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.146   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:53.146    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:53.147    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:53.147    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:53.147    00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:53.147   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:30:53.147   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.147   00:12:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.715  nvme0n1
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.715    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:53.715    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:53.715    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.715    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.715    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.715   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:53.974   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:53.974    00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:53.974   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:53.974   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:53.974   00:12:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:54.543  nvme0n1
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:54.543    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:54.543    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:54.543    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:54.543    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:54.543    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OWRkMzY2MWE4ZGFlZWY4NDE2M2E3ZDA1NjcxYTA0NmQ2NDk5MjNkZjc4NWI4MWU5Z2tU4A==:
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO: ]]
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzI4MzRiOTk4OTAyZTI1OWE0MTYxZjExNmE5M2NjNjkBEanO:
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:30:54.543   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:54.544    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:54.544   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.111  nvme0n1
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.111    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:55.111    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:55.111    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.111    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.111    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:30:55.111   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjJlNzZmOWZlMjlkNGYxYzY5ZjQ5MjM1NmVlZjFiNjk1YjkyNTJiNjA2ZTJjOTEyMmZlZTdkN2MxMzVmMTY5YkXp4J4=:
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:55.112    00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.112   00:12:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.679  nvme0n1
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.679   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:55.679    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.939  request:
00:30:55.939  {
00:30:55.939  "name": "nvme0",
00:30:55.939  "trtype": "tcp",
00:30:55.939  "traddr": "10.0.0.1",
00:30:55.939  "adrfam": "ipv4",
00:30:55.939  "trsvcid": "4420",
00:30:55.939  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:30:55.939  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:30:55.939  "prchk_reftag": false,
00:30:55.939  "prchk_guard": false,
00:30:55.939  "hdgst": false,
00:30:55.939  "ddgst": false,
00:30:55.939  "allow_unrecognized_csi": false,
00:30:55.939  "method": "bdev_nvme_attach_controller",
00:30:55.939  "req_id": 1
00:30:55.939  }
00:30:55.939  Got JSON-RPC error response
00:30:55.939  response:
00:30:55.939  {
00:30:55.939  "code": -5,
00:30:55.939  "message": "Input/output error"
00:30:55.939  }
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.939   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 ))
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:55.939    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.940  request:
00:30:55.940  {
00:30:55.940  "name": "nvme0",
00:30:55.940  "trtype": "tcp",
00:30:55.940  "traddr": "10.0.0.1",
00:30:55.940  "adrfam": "ipv4",
00:30:55.940  "trsvcid": "4420",
00:30:55.940  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:30:55.940  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:30:55.940  "prchk_reftag": false,
00:30:55.940  "prchk_guard": false,
00:30:55.940  "hdgst": false,
00:30:55.940  "ddgst": false,
00:30:55.940  "dhchap_key": "key2",
00:30:55.940  "allow_unrecognized_csi": false,
00:30:55.940  "method": "bdev_nvme_attach_controller",
00:30:55.940  "req_id": 1
00:30:55.940  }
00:30:55.940  Got JSON-RPC error response
00:30:55.940  response:
00:30:55.940  {
00:30:55.940  "code": -5,
00:30:55.940  "message": "Input/output error"
00:30:55.940  }
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 ))
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.940    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:55.940   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.199  request:
00:30:56.199  {
00:30:56.199  "name": "nvme0",
00:30:56.199  "trtype": "tcp",
00:30:56.199  "traddr": "10.0.0.1",
00:30:56.199  "adrfam": "ipv4",
00:30:56.199  "trsvcid": "4420",
00:30:56.199  "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:30:56.199  "hostnqn": "nqn.2024-02.io.spdk:host0",
00:30:56.199  "prchk_reftag": false,
00:30:56.199  "prchk_guard": false,
00:30:56.199  "hdgst": false,
00:30:56.199  "ddgst": false,
00:30:56.199  "dhchap_key": "key1",
00:30:56.199  "dhchap_ctrlr_key": "ckey2",
00:30:56.199  "allow_unrecognized_csi": false,
00:30:56.199  "method": "bdev_nvme_attach_controller",
00:30:56.199  "req_id": 1
00:30:56.199  }
00:30:56.199  Got JSON-RPC error response
00:30:56.199  response:
00:30:56.199  {
00:30:56.199  "code": -5,
00:30:56.199  "message": "Input/output error"
00:30:56.199  }
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:56.199    00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.199  nvme0n1
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:56.199   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:56.200   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:30:56.200   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.200   00:12:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.200   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.200    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers
00:30:56.200    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name'
00:30:56.200    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.200    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.200    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.458  request:
00:30:56.458  {
00:30:56.458  "name": "nvme0",
00:30:56.458  "dhchap_key": "key1",
00:30:56.458  "dhchap_ctrlr_key": "ckey2",
00:30:56.458  "method": "bdev_nvme_set_keys",
00:30:56.458  "req_id": 1
00:30:56.458  }
00:30:56.458  Got JSON-RPC error response
00:30:56.458  response:
00:30:56.458  {
00:30:56.458  "code": -13,
00:30:56.458  "message": "Permission denied"
00:30:56.458  }
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:56.458    00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:30:56.458   00:12:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:30:57.402    00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:30:57.402    00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:30:57.402    00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:57.402    00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:57.402    00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:57.402   00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 ))
00:30:57.402   00:12:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 ))
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZTc5M2I5ZWM2Yjg5ZTgyYmRjNDMxNGRiOWE4NmI5NDk1NGZiNzI5Mzg3ZGJlZGQ2l9GgXg==:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==: ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODJhNWRjOTNmZWMzNmEyZWRjNDFlYWUyYmM3MDlkZjI1ZjhmNGYzZTc4ZjU4NDg1VmaNgQ==:
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.779  nvme0n1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2I3NGYzMzc4ZjNlYTQwMDgxY2FiMmEzZTlkMWU0MjU0L2Wz:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s: ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjYzYjVmYTU4MzY5OGE1ZDNjZWRiZTIzZjk1ZDM1YTXIAI/s:
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.779  request:
00:30:58.779  {
00:30:58.779  "name": "nvme0",
00:30:58.779  "dhchap_key": "key2",
00:30:58.779  "dhchap_ctrlr_key": "ckey1",
00:30:58.779  "method": "bdev_nvme_set_keys",
00:30:58.779  "req_id": 1
00:30:58.779  }
00:30:58.779  Got JSON-RPC error response
00:30:58.779  response:
00:30:58.779  {
00:30:58.779  "code": -13,
00:30:58.779  "message": "Permission denied"
00:30:58.779  }
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:58.779    00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 ))
00:30:58.779   00:12:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s
00:30:59.715    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers
00:30:59.715    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length
00:30:59.715    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:59.715    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:30:59.974    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 ))
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:59.974  rmmod nvme_tcp
00:30:59.974  rmmod nvme_fabrics
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3221614 ']'
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3221614
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3221614 ']'
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3221614
00:30:59.974    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:59.974    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3221614
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3221614'
00:30:59.974  killing process with pid 3221614
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3221614
00:30:59.974   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3221614
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:00.233   00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:00.233    00:12:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]]
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:31:02.135   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:31:02.393   00:12:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:04.929  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:31:04.929  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:31:04.929  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:31:04.929  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:31:04.929  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:31:05.188  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:31:06.127  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:31:06.127   00:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Thy /tmp/spdk.key-null.Prq /tmp/spdk.key-sha256.Uoy /tmp/spdk.key-sha384.GPH /tmp/spdk.key-sha512.uFH /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log
00:31:06.127   00:12:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:31:09.421  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:31:09.421  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:31:09.421  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:31:09.421  
00:31:09.421  real	0m53.715s
00:31:09.421  user	0m48.443s
00:31:09.421  sys	0m12.412s
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:31:09.421  ************************************
00:31:09.421  END TEST nvmf_auth_host
00:31:09.421  ************************************
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:09.421  ************************************
00:31:09.421  START TEST nvmf_digest
00:31:09.421  ************************************
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:31:09.421  * Looking for test storage...
00:31:09.421  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-:
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-:
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0
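The trace above shows scripts/common.sh splitting two version strings into numeric fields (ver1_l=2 fields vs ver2_l=1) and comparing them element-wise, treating missing fields as 0. A minimal Python sketch of that comparison, assuming the same semantics (the function name `ver_compare` is hypothetical, not an SPDK helper):

```python
def ver_compare(ver1, ver2):
    """Return -1, 0, or 1 as ver1 <, ==, > ver2, comparing numeric fields.

    Mirrors the shell loop above: iterate up to max(ver1_l, ver2_l)
    and treat fields missing from the shorter version as 0.
    """
    a = [int(x) for x in ver1.split(".")]
    b = [int(x) for x in ver2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    for x, y in zip(a, b):
        if x > y:
            return 1
        if x < y:
            return -1
    return 0
```

In the run above the first fields compare as 1 < 2, so the `<` test succeeds and the script returns 0 (true) without looking at further fields.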
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:09.421  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:09.421  		--rc genhtml_branch_coverage=1
00:31:09.421  		--rc genhtml_function_coverage=1
00:31:09.421  		--rc genhtml_legend=1
00:31:09.421  		--rc geninfo_all_blocks=1
00:31:09.421  		--rc geninfo_unexecuted_blocks=1
00:31:09.421  		
00:31:09.421  		'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:09.421  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:09.421  		--rc genhtml_branch_coverage=1
00:31:09.421  		--rc genhtml_function_coverage=1
00:31:09.421  		--rc genhtml_legend=1
00:31:09.421  		--rc geninfo_all_blocks=1
00:31:09.421  		--rc geninfo_unexecuted_blocks=1
00:31:09.421  		
00:31:09.421  		'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:09.421  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:09.421  		--rc genhtml_branch_coverage=1
00:31:09.421  		--rc genhtml_function_coverage=1
00:31:09.421  		--rc genhtml_legend=1
00:31:09.421  		--rc geninfo_all_blocks=1
00:31:09.421  		--rc geninfo_unexecuted_blocks=1
00:31:09.421  		
00:31:09.421  		'
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:09.421  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:09.421  		--rc genhtml_branch_coverage=1
00:31:09.421  		--rc genhtml_function_coverage=1
00:31:09.421  		--rc genhtml_legend=1
00:31:09.421  		--rc geninfo_all_blocks=1
00:31:09.421  		--rc geninfo_unexecuted_blocks=1
00:31:09.421  		
00:31:09.421  		'
00:31:09.421   00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:09.421    00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:09.421     00:12:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:09.421    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:09.422     00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob
00:31:09.422     00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:09.422     00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:09.422     00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:09.422      00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.422      00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.422      00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.422      00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH
00:31:09.422      00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:09.422  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]]
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:09.422    00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable
00:31:09.422   00:12:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=()
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx
00:31:15.995   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
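The array population above builds per-family NIC device-ID lists keyed into a PCI bus cache. For readability, the same vendor/device pairs (hex values copied from this log) can be grouped as a lookup table; the `classify` helper below is illustrative only, not part of nvmf/common.sh:

```python
# Vendor IDs as declared in the trace (intel=0x8086, mellanox=0x15b3).
INTEL, MELLANOX = 0x8086, 0x15B3

# Device IDs per NIC family, copied from the e810/x722/mlx array appends above.
NIC_IDS = {
    "e810": [(INTEL, 0x1592), (INTEL, 0x159B)],
    "x722": [(INTEL, 0x37D2)],
    "mlx":  [(MELLANOX, d) for d in
             (0xA2DC, 0x1021, 0xA2D6, 0x101D, 0x101B,
              0x1017, 0x1019, 0x1015, 0x1013)],
}

def classify(vendor, device):
    """Return the NIC family name for a (vendor, device) pair, or None."""
    for family, ids in NIC_IDS.items():
        if (vendor, device) in ids:
            return family
    return None
```

Both ports found later in this run (0000:af:00.0 and 0000:af:00.1, device 0x159b) fall in the e810 family, which is why `pci_devs` is reset to the e810 list.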
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:31:15.996  Found 0000:af:00.0 (0x8086 - 0x159b)
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:31:15.996  Found 0000:af:00.1 (0x8086 - 0x159b)
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:31:15.996  Found net devices under 0000:af:00.0: cvl_0_0
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:31:15.996  Found net devices under 0000:af:00.1: cvl_0_1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:15.996  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:15.996  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:31:15.996  
00:31:15.996  --- 10.0.0.2 ping statistics ---
00:31:15.996  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:15.996  rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:15.996  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:15.996  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms
00:31:15.996  
00:31:15.996  --- 10.0.0.1 ping statistics ---
00:31:15.996  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:15.996  rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0
00:31:15.996   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
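The nvmf_tcp_init sequence above moves one NIC port into a fresh network namespace, addresses both ends on 10.0.0.0/24, and verifies connectivity in both directions with ping. A sketch of that ip(8) sequence, collected as argv lists for illustration (interface names and addresses are the ones in this log; nothing here executes or requires root):

```python
def tcp_init_cmds(tgt="cvl_0_0", ini="cvl_0_1", ns="cvl_0_0_ns_spdk",
                  tgt_ip="10.0.0.2", ini_ip="10.0.0.1"):
    """Return the ip(8) command sequence nvmf_tcp_init runs, as argv lists."""
    in_ns = ["ip", "netns", "exec", ns]
    return [
        ["ip", "-4", "addr", "flush", tgt],          # clear stale addresses
        ["ip", "-4", "addr", "flush", ini],
        ["ip", "netns", "add", ns],                  # namespace for the target
        ["ip", "link", "set", tgt, "netns", ns],     # move target port into it
        ["ip", "addr", "add", f"{ini_ip}/24", "dev", ini],
        in_ns + ["ip", "addr", "add", f"{tgt_ip}/24", "dev", tgt],
        ["ip", "link", "set", ini, "up"],
        in_ns + ["ip", "link", "set", tgt, "up"],
        in_ns + ["ip", "link", "set", "lo", "up"],
    ]
```

After this, the iptables ACCEPT rule for port 4420 and the two pings in the log confirm the initiator side (10.0.0.1) and the namespaced target side (10.0.0.2) can reach each other.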
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:15.997  ************************************
00:31:15.997  START TEST nvmf_digest_clean
00:31:15.997  ************************************
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]]
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc")
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3235863
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3235863
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3235863 ']'
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:15.997  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:15.997   00:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:15.997  [2024-12-10 00:12:31.011504] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:15.997  [2024-12-10 00:12:31.011546] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:15.997  [2024-12-10 00:12:31.086726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:15.997  [2024-12-10 00:12:31.123647] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:15.997  [2024-12-10 00:12:31.123681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:15.997  [2024-12-10 00:12:31.123688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:15.997  [2024-12-10 00:12:31.123693] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:15.997  [2024-12-10 00:12:31.123699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:15.997  [2024-12-10 00:12:31.124186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]]
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:15.997  null0
00:31:15.997  [2024-12-10 00:12:31.290740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:15.997  [2024-12-10 00:12:31.314931] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3235912
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3235912 /var/tmp/bperf.sock
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3235912 ']'
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:15.997  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:15.997  [2024-12-10 00:12:31.366408] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:15.997  [2024-12-10 00:12:31.366447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235912 ]
00:31:15.997  [2024-12-10 00:12:31.439861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:15.997  [2024-12-10 00:12:31.480979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:15.997   00:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:16.256  nvme0n1
00:31:16.256   00:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:16.256   00:12:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:16.256  Running I/O for 2 seconds...
00:31:18.572      24825.00 IOPS,    96.97 MiB/s
[2024-12-09T23:12:34.429Z]     24920.50 IOPS,    97.35 MiB/s
00:31:18.572                                                                                                  Latency(us)
00:31:18.572  
[2024-12-09T23:12:34.429Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:18.572  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:18.572  	 nvme0n1             :       2.00   24939.25      97.42       0.00     0.00    5127.67    2637.04   17351.44
00:31:18.572  
[2024-12-09T23:12:34.429Z]  ===================================================================================================================
00:31:18.572  
[2024-12-09T23:12:34.429Z]  Total                       :              24939.25      97.42       0.00     0.00    5127.67    2637.04   17351.44
00:31:18.572  {
00:31:18.572    "results": [
00:31:18.572      {
00:31:18.572        "job": "nvme0n1",
00:31:18.572        "core_mask": "0x2",
00:31:18.572        "workload": "randread",
00:31:18.572        "status": "finished",
00:31:18.572        "queue_depth": 128,
00:31:18.572        "io_size": 4096,
00:31:18.572        "runtime": 2.003629,
00:31:18.572        "iops": 24939.24773498487,
00:31:18.572        "mibps": 97.41893646478465,
00:31:18.572        "io_failed": 0,
00:31:18.572        "io_timeout": 0,
00:31:18.572        "avg_latency_us": 5127.66712131045,
00:31:18.572        "min_latency_us": 2637.0438095238096,
00:31:18.572        "max_latency_us": 17351.43619047619
00:31:18.572      }
00:31:18.572    ],
00:31:18.572    "core_count": 1
00:31:18.572  }
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:18.572  			| select(.opcode=="crc32c")
00:31:18.572  			| "\(.module_name) \(.executed)"'
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3235912
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3235912 ']'
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3235912
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:31:18.572   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:18.572    00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3235912
00:31:18.573   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:18.573   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:18.573   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3235912'
00:31:18.573  killing process with pid 3235912
00:31:18.573   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3235912
00:31:18.573  Received shutdown signal, test time was about 2.000000 seconds
00:31:18.573  
00:31:18.573                                                                                                  Latency(us)
00:31:18.573  
[2024-12-09T23:12:34.430Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:18.573  
[2024-12-09T23:12:34.430Z]  ===================================================================================================================
00:31:18.573  
[2024-12-09T23:12:34.430Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:18.573   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3235912
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3236375
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3236375 /var/tmp/bperf.sock
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3236375 ']'
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:18.832  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:18.832   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:18.832  [2024-12-10 00:12:34.590447] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:18.832  [2024-12-10 00:12:34.590494] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236375 ]
00:31:18.832  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:18.832  Zero copy mechanism will not be used.
00:31:18.832  [2024-12-10 00:12:34.663459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:19.091  [2024-12-10 00:12:34.700097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:19.091   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:19.091   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:31:19.091   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:19.091   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:19.091   00:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:19.349   00:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:19.349   00:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:19.608  nvme0n1
00:31:19.608   00:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:19.608   00:12:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:19.867  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:19.867  Zero copy mechanism will not be used.
00:31:19.867  Running I/O for 2 seconds...
00:31:21.743       5704.00 IOPS,   713.00 MiB/s
[2024-12-09T23:12:37.600Z]      5951.50 IOPS,   743.94 MiB/s
00:31:21.743                                                                                                  Latency(us)
00:31:21.743  
[2024-12-09T23:12:37.600Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:21.743  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:21.743  	 nvme0n1             :       2.00    5952.13     744.02       0.00     0.00    2685.22     639.76    7521.04
00:31:21.743  
[2024-12-09T23:12:37.600Z]  ===================================================================================================================
00:31:21.743  
[2024-12-09T23:12:37.600Z]  Total                       :               5952.13     744.02       0.00     0.00    2685.22     639.76    7521.04
00:31:21.743  {
00:31:21.743    "results": [
00:31:21.743      {
00:31:21.743        "job": "nvme0n1",
00:31:21.743        "core_mask": "0x2",
00:31:21.743        "workload": "randread",
00:31:21.743        "status": "finished",
00:31:21.743        "queue_depth": 16,
00:31:21.743        "io_size": 131072,
00:31:21.743        "runtime": 2.002979,
00:31:21.743        "iops": 5952.134295966159,
00:31:21.743        "mibps": 744.0167869957698,
00:31:21.743        "io_failed": 0,
00:31:21.743        "io_timeout": 0,
00:31:21.743        "avg_latency_us": 2685.223450843179,
00:31:21.743        "min_latency_us": 639.7561904761905,
00:31:21.743        "max_latency_us": 7521.03619047619
00:31:21.743      }
00:31:21.743    ],
00:31:21.743    "core_count": 1
00:31:21.743  }
00:31:21.743   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:21.743    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:21.743    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:21.743  			| select(.opcode=="crc32c")
00:31:21.743  			| "\(.module_name) \(.executed)"'
00:31:21.743    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:21.743    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3236375
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3236375 ']'
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3236375
00:31:22.005    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:22.005    00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3236375
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3236375'
00:31:22.005  killing process with pid 3236375
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3236375
00:31:22.005  Received shutdown signal, test time was about 2.000000 seconds
00:31:22.005  
00:31:22.005                                                                                                  Latency(us)
00:31:22.005  
[2024-12-09T23:12:37.862Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:22.005  
[2024-12-09T23:12:37.862Z]  ===================================================================================================================
00:31:22.005  
[2024-12-09T23:12:37.862Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:22.005   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3236375
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3237010
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3237010 /var/tmp/bperf.sock
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3237010 ']'
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:22.264   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:22.265   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:22.265  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:22.265   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:22.265   00:12:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:22.265  [2024-12-10 00:12:37.981020] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:22.265  [2024-12-10 00:12:37.981068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237010 ]
00:31:22.265  [2024-12-10 00:12:38.055547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:22.265  [2024-12-10 00:12:38.095589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:22.526   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:22.526   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:31:22.526   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:22.526   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:22.526   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:22.786   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:22.786   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:23.044  nvme0n1
00:31:23.044   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:23.044   00:12:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:23.044  Running I/O for 2 seconds...
00:31:25.355      28001.00 IOPS,   109.38 MiB/s
[2024-12-09T23:12:41.212Z]     28142.00 IOPS,   109.93 MiB/s
00:31:25.355                                                                                                  Latency(us)
00:31:25.355  
[2024-12-09T23:12:41.212Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:25.355  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:25.355  	 nvme0n1             :       2.01   28151.57     109.97       0.00     0.00    4541.07    2246.95   15666.22
00:31:25.355  
[2024-12-09T23:12:41.212Z]  ===================================================================================================================
00:31:25.355  
[2024-12-09T23:12:41.212Z]  Total                       :              28151.57     109.97       0.00     0.00    4541.07    2246.95   15666.22
00:31:25.355  {
00:31:25.355    "results": [
00:31:25.355      {
00:31:25.355        "job": "nvme0n1",
00:31:25.355        "core_mask": "0x2",
00:31:25.355        "workload": "randwrite",
00:31:25.355        "status": "finished",
00:31:25.355        "queue_depth": 128,
00:31:25.355        "io_size": 4096,
00:31:25.355        "runtime": 2.006709,
00:31:25.355        "iops": 28151.565573284417,
00:31:25.355        "mibps": 109.96705302064225,
00:31:25.355        "io_failed": 0,
00:31:25.355        "io_timeout": 0,
00:31:25.355        "avg_latency_us": 4541.065162062559,
00:31:25.355        "min_latency_us": 2246.9485714285715,
00:31:25.355        "max_latency_us": 15666.224761904761
00:31:25.355      }
00:31:25.355    ],
00:31:25.355    "core_count": 1
00:31:25.355  }
00:31:25.355   00:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:25.355    00:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:25.355    00:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:25.355    00:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:25.355  			| select(.opcode=="crc32c")
00:31:25.355  			| "\(.module_name) \(.executed)"'
00:31:25.355    00:12:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3237010
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3237010 ']'
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3237010
00:31:25.355    00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:25.355    00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3237010
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3237010'
00:31:25.355  killing process with pid 3237010
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3237010
00:31:25.355  Received shutdown signal, test time was about 2.000000 seconds
00:31:25.355  
00:31:25.355                                                                                                  Latency(us)
00:31:25.355  
[2024-12-09T23:12:41.212Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:25.355  
[2024-12-09T23:12:41.212Z]  ===================================================================================================================
00:31:25.355  
[2024-12-09T23:12:41.212Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:25.355   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3237010
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3237507
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3237507 /var/tmp/bperf.sock
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3237507 ']'
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:25.614  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:25.614   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:25.614  [2024-12-10 00:12:41.346025] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:25.614  [2024-12-10 00:12:41.346075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3237507 ]
00:31:25.614  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:25.614  Zero copy mechanism will not be used.
00:31:25.614  [2024-12-10 00:12:41.420875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:25.614  [2024-12-10 00:12:41.457013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:25.874   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:25.874   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0
00:31:25.874   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:31:25.874   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:31:25.874   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:26.133   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:26.133   00:12:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:26.393  nvme0n1
00:31:26.393   00:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:31:26.393   00:12:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:26.393  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:26.393  Zero copy mechanism will not be used.
00:31:26.393  Running I/O for 2 seconds...
00:31:28.707       6650.00 IOPS,   831.25 MiB/s
[2024-12-09T23:12:44.564Z]      7064.50 IOPS,   883.06 MiB/s
00:31:28.707                                                                                                  Latency(us)
00:31:28.707  
[2024-12-09T23:12:44.564Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:28.707  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:28.707  	 nvme0n1             :       2.00    7062.02     882.75       0.00     0.00    2261.60    1513.57    7365.00
00:31:28.707  
[2024-12-09T23:12:44.564Z]  ===================================================================================================================
00:31:28.707  
[2024-12-09T23:12:44.564Z]  Total                       :               7062.02     882.75       0.00     0.00    2261.60    1513.57    7365.00
00:31:28.707  {
00:31:28.707    "results": [
00:31:28.707      {
00:31:28.707        "job": "nvme0n1",
00:31:28.707        "core_mask": "0x2",
00:31:28.707        "workload": "randwrite",
00:31:28.707        "status": "finished",
00:31:28.707        "queue_depth": 16,
00:31:28.707        "io_size": 131072,
00:31:28.707        "runtime": 2.003677,
00:31:28.707        "iops": 7062.016482696562,
00:31:28.707        "mibps": 882.7520603370702,
00:31:28.707        "io_failed": 0,
00:31:28.707        "io_timeout": 0,
00:31:28.707        "avg_latency_us": 2261.6026437826013,
00:31:28.707        "min_latency_us": 1513.5695238095238,
00:31:28.707        "max_latency_us": 7364.998095238096
00:31:28.707      }
00:31:28.707    ],
00:31:28.707    "core_count": 1
00:31:28.707  }
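The bdevperf JSON summary above (a `results` array with per-job `iops`, `mibps`, and latency fields) is what `bdevperf.py perform_tests` prints on completion, and it can be consumed programmatically. A minimal Python sketch, assuming the JSON has been captured as a string (the values below are copied from this run; no extra fields are invented):

```python
import json

# bdevperf result in the same shape as the log output above;
# values copied verbatim from this test run.
raw = '''
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.003677,
      "iops": 7062.016482696562,
      "mibps": 882.7520603370702,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 2261.6026437826013,
      "min_latency_us": 1513.5695238095238,
      "max_latency_us": 7364.998095238096
    }
  ],
  "core_count": 1
}
'''

data = json.loads(raw)
for job in data["results"]:
    # Cross-check the reported MiB/s against IOPS * io_size;
    # the two fields are reported independently by bdevperf.
    derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
    print(f'{job["job"]}: {job["iops"]:.2f} IOPS, '
          f'{job["mibps"]:.2f} MiB/s (derived {derived_mibps:.2f}), '
          f'{job["io_failed"]} failed')
```

Note that `iops * io_size / 2^20` reproduces the `mibps` field exactly, which is a quick sanity check when post-processing these results.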
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:31:28.707  			| select(.opcode=="crc32c")
00:31:28.707  			| "\(.module_name) \(.executed)"'
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
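The `get_accel_stats` step above pipes `accel_get_stats` through a jq filter that keeps only the `crc32c` operation and emits `"<module_name> <executed>"`, which `read -r acc_module acc_executed` then splits. A Python sketch of the same selection, using a hypothetical `accel_get_stats` payload (the real RPC response carries more fields; only the ones the jq filter touches are modeled here):

```python
import json

# Hypothetical accel_get_stats response; only opcode, module_name,
# and executed are modeled, matching what the jq filter reads.
stats_json = '''
{
  "operations": [
    {"opcode": "copy",   "module_name": "software", "executed": 0},
    {"opcode": "crc32c", "module_name": "software", "executed": 1024}
  ]
}
'''

# Equivalent of:
#   jq -rc '.operations[] | select(.opcode=="crc32c")
#           | "\(.module_name) \(.executed)"'
stats = json.loads(stats_json)
lines = [f'{op["module_name"]} {op["executed"]}'
         for op in stats["operations"] if op["opcode"] == "crc32c"]

# digest.sh then does: read -r acc_module acc_executed
acc_module, acc_executed = lines[0].split()
print(acc_module, acc_executed)
```

This is why the subsequent checks at digest.sh@95 and @96 can assert `acc_executed > 0` and compare `acc_module` against the expected `software` module.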
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3237507
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3237507 ']'
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3237507
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:28.707    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3237507
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3237507'
00:31:28.707  killing process with pid 3237507
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3237507
00:31:28.707  Received shutdown signal, test time was about 2.000000 seconds
00:31:28.707  
00:31:28.707                                                                                                  Latency(us)
00:31:28.707  
[2024-12-09T23:12:44.564Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:28.707  
[2024-12-09T23:12:44.564Z]  ===================================================================================================================
00:31:28.707  
[2024-12-09T23:12:44.564Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:28.707   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3237507
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3235863
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3235863 ']'
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3235863
00:31:28.966    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:28.966    00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3235863
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3235863'
00:31:28.966  killing process with pid 3235863
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3235863
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3235863
00:31:28.966  
00:31:28.966  real	0m13.861s
00:31:28.966  user	0m26.536s
00:31:28.966  sys	0m4.568s
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:28.966   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:31:28.966  ************************************
00:31:28.966  END TEST nvmf_digest_clean
00:31:28.966  ************************************
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:29.225  ************************************
00:31:29.225  START TEST nvmf_digest_error
00:31:29.225  ************************************
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3238147
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3238147
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3238147 ']'
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:29.225  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:29.225   00:12:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.225  [2024-12-10 00:12:44.945017] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:29.225  [2024-12-10 00:12:44.945060] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:29.225  [2024-12-10 00:12:45.022607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:29.225  [2024-12-10 00:12:45.061230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:29.225  [2024-12-10 00:12:45.061264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:29.225  [2024-12-10 00:12:45.061271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:29.225  [2024-12-10 00:12:45.061277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:29.225  [2024-12-10 00:12:45.061283] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:29.225  [2024-12-10 00:12:45.061771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.484  [2024-12-10 00:12:45.126203] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.484  null0
00:31:29.484  [2024-12-10 00:12:45.221409] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:29.484  [2024-12-10 00:12:45.245599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3238222
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3238222 /var/tmp/bperf.sock
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3238222 ']'
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:29.484  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:29.484   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:29.484  [2024-12-10 00:12:45.296140] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:29.484  [2024-12-10 00:12:45.296183] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238222 ]
00:31:29.743  [2024-12-10 00:12:45.369160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:29.743  [2024-12-10 00:12:45.407926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:29.743   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:29.743   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:29.743   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:29.743   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:30.002   00:12:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:30.261  nvme0n1
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:30.261   00:12:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:30.520  Running I/O for 2 seconds...
00:31:30.520  [2024-12-10 00:12:46.197608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.197643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.197654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.208877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.208900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.208908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.221524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.221547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.221556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.234283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.234305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.234313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.243317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.243337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.243345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.251293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.251313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.520  [2024-12-10 00:12:46.251321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.520  [2024-12-10 00:12:46.261038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.520  [2024-12-10 00:12:46.261058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.261066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.271555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.271575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.271583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.279839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.279860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.279868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.290138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.290158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.290172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.299240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.299260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.299268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.309521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.309540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.309549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.319596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.319618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.319627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.328160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.328186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.328194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.340095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.340116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.340124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.350706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.350725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.350732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.359586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.359606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.359614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.521  [2024-12-10 00:12:46.370584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.521  [2024-12-10 00:12:46.370605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:13975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.521  [2024-12-10 00:12:46.370616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.380756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.380777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.380785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.388993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.389014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.389021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.400283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.400304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.400311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.407830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.407850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.407858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.418653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.418673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.418681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.428431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.428450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.780  [2024-12-10 00:12:46.428458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.780  [2024-12-10 00:12:46.437693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.780  [2024-12-10 00:12:46.437712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.437721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.446533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.446552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.446560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.456796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.456819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.456828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.467371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.467391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:10879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.467398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.477310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.477331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.477339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.485984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.486004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.486012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.494782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.494802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.494809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.503732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.503752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.503759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.513290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.513310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:6619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.513318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.522721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.522742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.522750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.532987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.533007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.533015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.541978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.541999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.542006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.550475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.550494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.550502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.560542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.560561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.560569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.568268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.568288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.568296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.580045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.580066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.580074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.591746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.591766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.591774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.599826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.599845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.599853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.611257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.611277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.611285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.622591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.622610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.622622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:30.781  [2024-12-10 00:12:46.636181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:30.781  [2024-12-10 00:12:46.636201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:30.781  [2024-12-10 00:12:46.636209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.647402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.041  [2024-12-10 00:12:46.647422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.041  [2024-12-10 00:12:46.647429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.656077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.041  [2024-12-10 00:12:46.656096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.041  [2024-12-10 00:12:46.656104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.666178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.041  [2024-12-10 00:12:46.666197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.041  [2024-12-10 00:12:46.666204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.674671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.041  [2024-12-10 00:12:46.674691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.041  [2024-12-10 00:12:46.674699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.685709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.041  [2024-12-10 00:12:46.685729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.041  [2024-12-10 00:12:46.685736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.041  [2024-12-10 00:12:46.698326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.698346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.698354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.709550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.709569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.709577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.718065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.718088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.718096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.730271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.730290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.730298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.739981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.740000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.740007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.748597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.748616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.748623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.759957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.759976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.759984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.770949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.770968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:14285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.770976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.783020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.783040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.783048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.791818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.791837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.791845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.801848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.801867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.801875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.810472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.810491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.810499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.819885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.819906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.819914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.831029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.831048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.831056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.841446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.841465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.841474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.850887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.850907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.850914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.859663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.859682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.859690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.871230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.871250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.871258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.881138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.881158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.881172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.042  [2024-12-10 00:12:46.889475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.042  [2024-12-10 00:12:46.889497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.042  [2024-12-10 00:12:46.889505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.902153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.902178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.902187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.913388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.913407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.913415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.923297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.923316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.923324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.932373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.932393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.932400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.943501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.943520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.943528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.955217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.955237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.955245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.967059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.967079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.967087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.976823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.976843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.976852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.985964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.985984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.985992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:46.995694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:46.995714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:46.995721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.004192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.004212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.004219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.014037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.014057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.014065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.022128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.022147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:23483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.022155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.034247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.034267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.034275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.045035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.045055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.045062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.055650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.055669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.055677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.068816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.302  [2024-12-10 00:12:47.068837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.302  [2024-12-10 00:12:47.068849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.302  [2024-12-10 00:12:47.079986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.080006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.080014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.088707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.088727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.088735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.097315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.097334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.097342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.108867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.108887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.108895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.118686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.118706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.118715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.129691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.129711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.129718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.138640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.138659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.138666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.303  [2024-12-10 00:12:47.151050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.303  [2024-12-10 00:12:47.151069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.303  [2024-12-10 00:12:47.151078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.162706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.162731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:16931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.162739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.175253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.175273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:20768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.175281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562      24786.00 IOPS,    96.82 MiB/s
00:31:31.562  [2024-12-10 00:12:47.187208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.187228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.187236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.195833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.195853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.195860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.208088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.208108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.208115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.219259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.562  [2024-12-10 00:12:47.219278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.562  [2024-12-10 00:12:47.219286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.562  [2024-12-10 00:12:47.228481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.228500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.228508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.240339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.240358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.240366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.251648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.251668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.251676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.263607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.263627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.263635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.272143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.272163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.272175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.284992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.285013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.285021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.293079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.293097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.293105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.304836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.304855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.304863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.315790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.315810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.315817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.324378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.324398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.324406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.336373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.336393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.336401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.349065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.349085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.349096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.358728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.358747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.358755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.366958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.366984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.366992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.376454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.376473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:18458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.376481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.386558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.386577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.386585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.563  [2024-12-10 00:12:47.397633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.563  [2024-12-10 00:12:47.397652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.563  [2024-12-10 00:12:47.397659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.564  [2024-12-10 00:12:47.406022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.564  [2024-12-10 00:12:47.406043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.564  [2024-12-10 00:12:47.406051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.564  [2024-12-10 00:12:47.416783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.564  [2024-12-10 00:12:47.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.564  [2024-12-10 00:12:47.416811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.426691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.426712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.426720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.436094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.436115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.436124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.445563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.445583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.445591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.455440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.455460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.455467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.463087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.463107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.463115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.472319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.472338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.482446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.482465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.482473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.492480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.492500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:4940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.492508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.501505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.501525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:10366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.501533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.510549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.510569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.510580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.521317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.521337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.521345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.531732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.531752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.531760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.541469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.541489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:9214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.541497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.549774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.549794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.549802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.561206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.823  [2024-12-10 00:12:47.561225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.823  [2024-12-10 00:12:47.561233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.823  [2024-12-10 00:12:47.574032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.574052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.574059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.586630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.586651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.586659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.594958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.594978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.594986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.605435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.605459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.605467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.618031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.618053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.618061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.626611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.626631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.626639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.636988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.637009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.637017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.646509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.646529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.646537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.655186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.655207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.655215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.665386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.665406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.665414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:31.824  [2024-12-10 00:12:47.676446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:31.824  [2024-12-10 00:12:47.676466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:31.824  [2024-12-10 00:12:47.676474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.082  [2024-12-10 00:12:47.687309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.082  [2024-12-10 00:12:47.687330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.082  [2024-12-10 00:12:47.687339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.082  [2024-12-10 00:12:47.696866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.082  [2024-12-10 00:12:47.696886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.082  [2024-12-10 00:12:47.696894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.082  [2024-12-10 00:12:47.707854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.082  [2024-12-10 00:12:47.707875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.082  [2024-12-10 00:12:47.707883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.082  [2024-12-10 00:12:47.716293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.082  [2024-12-10 00:12:47.716314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.716321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.728957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.728978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.728986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.738128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.738147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.738154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.746421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.746440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:3510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.746448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.756172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.756193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.756200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.767049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.767068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.767076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.774781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.774801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.774812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.785724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.785744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.785752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.795181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.795200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.795208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.805490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.805510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.805518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.813497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.813517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.813524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.826055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.826075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.826082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.835988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.836007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:9063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.836015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.844956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.844977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.844984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.855701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.855720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.855728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.866906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.866933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.866941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.878141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.878160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.878173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.889566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.889585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.889593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.898010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.898029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.898037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.908570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.908589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.908597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.918345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.918365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.918373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.930138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.083  [2024-12-10 00:12:47.930157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.083  [2024-12-10 00:12:47.930173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.083  [2024-12-10 00:12:47.938019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.084  [2024-12-10 00:12:47.938038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.084  [2024-12-10 00:12:47.938048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.948705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.948724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.948732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.959071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.959090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.959098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.967797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.967817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.967825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.977792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.977812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.977821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.987355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.987375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.987382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:47.995473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:47.995492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:47.995501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.004849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.004869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.004877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.014378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.014397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.014405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.023487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.023507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.023514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.033742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.033762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.033774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.042008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.042028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.042036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.051274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.051294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.051302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.060423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.060442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.060450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.070022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.070041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.070050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.079057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.343  [2024-12-10 00:12:48.079076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.343  [2024-12-10 00:12:48.079084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.343  [2024-12-10 00:12:48.087787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.087806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.087814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.098441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.098461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.098469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.109006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.109026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.109033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.117653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.117672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.117680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.128204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.128223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.128231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.136419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.136438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.136446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.146465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.146484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.146492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.156732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.156751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.165789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.165808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.165816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  [2024-12-10 00:12:48.175289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.175308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.175316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344      25190.50 IOPS,    98.40 MiB/s
00:31:32.344  [2024-12-10 00:12:48.183757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8f4ae0)
00:31:32.344  [2024-12-10 00:12:48.183776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:32.344  [2024-12-10 00:12:48.183784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:32.344  
00:31:32.344                                                                                                  Latency(us)
00:31:32.344  
00:31:32.344  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:32.344  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:32.344  	 nvme0n1             :       2.00   25216.21      98.50       0.00     0.00    5070.78    2293.76   19972.88
00:31:32.344  
00:31:32.344  ===================================================================================================================
00:31:32.344  
00:31:32.344  Total                       :              25216.21      98.50       0.00     0.00    5070.78    2293.76   19972.88
00:31:32.344  {
00:31:32.344    "results": [
00:31:32.344      {
00:31:32.344        "job": "nvme0n1",
00:31:32.344        "core_mask": "0x2",
00:31:32.344        "workload": "randread",
00:31:32.344        "status": "finished",
00:31:32.344        "queue_depth": 128,
00:31:32.344        "io_size": 4096,
00:31:32.344        "runtime": 2.004266,
00:31:32.344        "iops": 25216.21381593062,
00:31:32.344        "mibps": 98.50083521847898,
00:31:32.344        "io_failed": 0,
00:31:32.344        "io_timeout": 0,
00:31:32.344        "avg_latency_us": 5070.7765262781,
00:31:32.344        "min_latency_us": 2293.76,
00:31:32.344        "max_latency_us": 19972.876190476192
00:31:32.344      }
00:31:32.344    ],
00:31:32.344    "core_count": 1
00:31:32.344  }
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:32.604  			| .driver_specific
00:31:32.604  			| .nvme_error
00:31:32.604  			| .status_code
00:31:32.604  			| .command_transient_transport_error'
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 198 > 0 ))
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3238222
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3238222 ']'
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3238222
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:32.604    00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238222
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238222'
00:31:32.604  killing process with pid 3238222
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3238222
00:31:32.604  Received shutdown signal, test time was about 2.000000 seconds
00:31:32.604  
00:31:32.604                                                                                                  Latency(us)
00:31:32.604  
00:31:32.604  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:32.604  
00:31:32.604  ===================================================================================================================
00:31:32.604  
00:31:32.604  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:32.604   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3238222
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3238691
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3238691 /var/tmp/bperf.sock
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3238691 ']'
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:32.865  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:32.865   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:32.865  [2024-12-10 00:12:48.659788] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:32.865  [2024-12-10 00:12:48.659835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3238691 ]
00:31:32.865  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:32.865  Zero copy mechanism will not be used.
00:31:33.124  [2024-12-10 00:12:48.734328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:33.124  [2024-12-10 00:12:48.774753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:33.124   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:33.124   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:33.124   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:33.124   00:12:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:33.383   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:33.642  nvme0n1
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:33.902   00:12:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:33.902  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:33.902  Zero copy mechanism will not be used.
00:31:33.902  Running I/O for 2 seconds...
00:31:33.902  [2024-12-10 00:12:49.608759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.608792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.608802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.614952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.614977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.614985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.621235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.621258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.621266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.626845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.626866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.626875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.632341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.632362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.632370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.637994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.638016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.638024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.643465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.643486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.643494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.648817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.648838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.648848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.654175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.654200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.654208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.659381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.659402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.659410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.664501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.664522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.664532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.669635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.669657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.669665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.674861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.674883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.674891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.680185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.680206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.680214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.685485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.685506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.685514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.690859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.690879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.690887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.696050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.696071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.696078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.701263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.701283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.701291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.902  [2024-12-10 00:12:49.706629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.902  [2024-12-10 00:12:49.706651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.902  [2024-12-10 00:12:49.706659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.712053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.712074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.712082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.717411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.717431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.717439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.722631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.722652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.722660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.727939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.727959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.727967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.733180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.733200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.733208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.738377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.738398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.738405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.743816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.743837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.743849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.749056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.749077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.749085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:33.903  [2024-12-10 00:12:49.754258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:33.903  [2024-12-10 00:12:49.754294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:33.903  [2024-12-10 00:12:49.754302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.760056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.760077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.760085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.766959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.766981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.766989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.774336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.774357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.774365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.781360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.781382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.781390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.789359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.789382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.789390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.796922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.796944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.796952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.803708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.803743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.809660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.809682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.809691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.816796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.816818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.816827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.823136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.823158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.823171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.830335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.830357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.830365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.837772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.837793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.837802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.841369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.841390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.841397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.848413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.164  [2024-12-10 00:12:49.848435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.164  [2024-12-10 00:12:49.848443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.164  [2024-12-10 00:12:49.855818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.855840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.855854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.861810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.861832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.861840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.867293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.867314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.867322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.872882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.872904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.872912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.878518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.878540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.878548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.884364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.884385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.884393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.889902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.889924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.889932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.895214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.895235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.895244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.900625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.900646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.900654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.906152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.906183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.906191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.911704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.911725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.911733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.917239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.917259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.917267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.922895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.922916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.922924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.928601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.928622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.928630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.934225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.934246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.934253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.939792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.939812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.939820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.945336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.945357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.945365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.165  [2024-12-10 00:12:49.950945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.165  [2024-12-10 00:12:49.950965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.165  [2024-12-10 00:12:49.950972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.956323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.956342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.956350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.961679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.961700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.961707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.967029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.967049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.967057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.972365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.972386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.972393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.977631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.977652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.977660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.983061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.983081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.983089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.988448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.988469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.988476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.993639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.993659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.993667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:49.998834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:49.998855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:49.998866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:50.003986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:50.004007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:50.004015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:50.009296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:50.009333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:50.009346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.166  [2024-12-10 00:12:50.016159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.166  [2024-12-10 00:12:50.016191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.166  [2024-12-10 00:12:50.016200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.021887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.021908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.021917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.027490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.027510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.027519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.033115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.033135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.033143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.039538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.039565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.039575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.045346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.045369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.045378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.050855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.050881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.050889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.056625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.056646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.056655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.061759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.061781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.061789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.426  [2024-12-10 00:12:50.067590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.426  [2024-12-10 00:12:50.067612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.426  [2024-12-10 00:12:50.067621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.073537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.073584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.073609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.079763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.079785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.079793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.086100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.086123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.086131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.093108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.093130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.093138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.099594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.099616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.104880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.104901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.104910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.110253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.110275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.110283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.115831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.115852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.115861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.121227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.121248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.121256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.126805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.126826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.126835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.132325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.132347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.132355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.138179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.138201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.138210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.143658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.143679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.143687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.149123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.149150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.149158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.154555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.154576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.154585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.159947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.159969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.159976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.165472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.165494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.165503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.170891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.170912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.170921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.176281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.176303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.176312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.181777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.181799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.181807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.187443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.427  [2024-12-10 00:12:50.187465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.427  [2024-12-10 00:12:50.187473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.427  [2024-12-10 00:12:50.193022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.193044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.193052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.198578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.198600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.198609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.204116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.204137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.204146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.209495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.209517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.209526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.214864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.214886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.214895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.220034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.220055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.220063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.225223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.225244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.225252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.230439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.230460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.230468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.235795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.235818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.235826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.241119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.241140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.241152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.246510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.246532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.246540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.251823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.251844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.251851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.257246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.257267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.257275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.262488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.262509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.262517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.267923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.267945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.267953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.273625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.273647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.273656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.428  [2024-12-10 00:12:50.279341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.428  [2024-12-10 00:12:50.279364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.428  [2024-12-10 00:12:50.279374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.692  [2024-12-10 00:12:50.285403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.285427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.285435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.292788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.292815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.292824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.301038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.301061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.301070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.308656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.308678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.308687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.316765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.316787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.316795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.324162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.324191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.324200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.332498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.332522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.332531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.340008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.340031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.340040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.347448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.347470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.347479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.355683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.355706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.355715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.362994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.363018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.363026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.370532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.370555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.370563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.378104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.378126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.378136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.385753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.385776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.385785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.392944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.392967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.392975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.399476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.399498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.399507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.406410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.693  [2024-12-10 00:12:50.406433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.693  [2024-12-10 00:12:50.406441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.693  [2024-12-10 00:12:50.414245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.414268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.414277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.421668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.421691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.421703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.428178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.428200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.428209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.431265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.431286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.431294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.437505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.437527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.437535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.443604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.443626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.443634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.449096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.449117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.449124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.454491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.454512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.454520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.460160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.460187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.460195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.465969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.465989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.465998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.472390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.472411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.472419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.480344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.480366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.486708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.486730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.486738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.493104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.493125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.493134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.499253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.499275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.499284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.505349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.505370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.694  [2024-12-10 00:12:50.505378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.694  [2024-12-10 00:12:50.510329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.694  [2024-12-10 00:12:50.510352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.510360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.515698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.515720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.515729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.521007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.521029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.521041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.526489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.526513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.526522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.531818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.531839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.531847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.537207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.537228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.537237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.542544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.542565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.542573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.695  [2024-12-10 00:12:50.547960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.695  [2024-12-10 00:12:50.547982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.695  [2024-12-10 00:12:50.547990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.553378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.553399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.553407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.558706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.558727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.558735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.564147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.564182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.564191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.569526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.569552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.569561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.574876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.574896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.574904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.580292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.580314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.580322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.584963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.584985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.584993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.590197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.590219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.590227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.595509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.595530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.595538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.958       5230.00 IOPS,   653.75 MiB/s
00:31:34.958  [2024-12-10 00:12:50.601687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.601708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.601717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.606899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.606921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.958  [2024-12-10 00:12:50.606929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.958  [2024-12-10 00:12:50.612049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.958  [2024-12-10 00:12:50.612070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.612078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.617029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.617051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.617060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.622278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.622299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.622308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.627535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.627556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.627565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.633062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.633084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.633091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.638546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.638568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.638576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.643952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.643973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.643982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.649963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.649985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.649993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.657698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.657721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.657729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.665192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.665213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.665224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.672494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.672518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.680724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.680748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.680756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.688127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.688149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.688158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.692288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.692311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.692320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.697787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.697808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.697816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.703116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.703139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.703147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.708416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.708437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.708446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.713898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.713920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.713928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.719609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.719635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.719643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.725220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.725241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.725249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.730576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.730599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.730607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.736006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.736027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.959  [2024-12-10 00:12:50.736036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.959  [2024-12-10 00:12:50.741832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.959  [2024-12-10 00:12:50.741853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.741862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.749544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.749566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.749575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.757592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.757614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.757623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.765677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.765699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.765708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.773253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.773276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.773288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.780903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.780925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.780934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.788306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.788328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.788336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.796146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.796173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.796182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.803615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.803637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.803645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:34.960  [2024-12-10 00:12:50.811114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:34.960  [2024-12-10 00:12:50.811136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:34.960  [2024-12-10 00:12:50.811145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.818514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.818536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.818544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.825935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.825958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.825966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.834071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.834092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.834100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.840757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.840784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.840793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.847659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.847680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.847688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.855236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.855257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.855265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.862542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.862564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.862573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.870614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.870635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.870644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.878088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.878108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.878117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.885474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.885497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.885506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.893329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.893350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.893359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.901659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.901681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.901690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.909316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.909338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.220  [2024-12-10 00:12:50.909347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.220  [2024-12-10 00:12:50.916323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.220  [2024-12-10 00:12:50.916345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.916354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.924304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.924326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.924334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.932001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.932021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.940111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.940133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.940141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.946820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.946841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.946850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.953584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.953606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.953615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.960448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.960470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.960478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.965996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.966018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.966029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.971283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.971304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.971311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.976679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.976700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.976708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.982210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.982231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.982239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.987563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.987584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.987591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.992927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.992947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.992955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:50.998332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:50.998352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:50.998360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.003777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.003798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.003805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.008987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.009008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.009016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.014204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.014228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.014235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.019364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.019385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.019393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.024478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.024498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.024505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.029579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.029599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.029606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.034700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.034721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.221  [2024-12-10 00:12:51.034729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.221  [2024-12-10 00:12:51.039822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.221  [2024-12-10 00:12:51.039843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.039850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.045084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.045105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.045112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.050344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.050364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.050372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.055782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.055810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.061124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.061144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.061152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.066502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.066522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.066530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.222  [2024-12-10 00:12:51.072078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.222  [2024-12-10 00:12:51.072098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.222  [2024-12-10 00:12:51.072107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.077488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.077509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.077518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.082743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.082764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.082772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.088159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.088186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.088194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.093426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.093447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.093455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.098843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.098863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.098871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.104218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.104238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.104248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.109642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.109663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.109670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.115179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.115199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.115207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.120660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.120681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.120689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.126129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.126150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.126158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.131436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.131456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.131464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.136650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.136671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.136680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.141757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.141779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.141787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.147003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.147023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.152328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.152349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.152357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.157568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.157589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.157597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.162770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.162790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.162798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.168017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.168037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.168044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.173158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.173186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.173194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.178770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.178790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.178798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.184429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.184450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.184458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.190105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.190125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.190132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.195441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.195463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.195474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.201122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.201143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.201151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.206757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.206777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.206786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.212382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.212403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.212411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.217288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.217309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.217317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.222651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.222672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.222680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.227352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.227372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.227380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.230371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.230392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.230401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.235510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.235530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.235538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.240630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.486  [2024-12-10 00:12:51.240654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.486  [2024-12-10 00:12:51.240662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.486  [2024-12-10 00:12:51.245913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.245932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.245940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.251076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.251097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.251104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.256440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.256461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.256468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.261638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.261659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.261666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.266738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.266758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.266767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.271832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.271853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.271861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.277059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.277080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.277088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.282317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.282337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.282346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.287597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.287617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.287625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.292435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.292457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.292465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.297550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.297570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.297578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.302752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.302773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.302781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.307977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.307996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.308004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.313246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.313267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.313274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.318305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.318334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.318342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.323535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.323556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.323564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.328639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.328660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.328673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.333937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.333957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.333965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.487  [2024-12-10 00:12:51.339099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.487  [2024-12-10 00:12:51.339119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.487  [2024-12-10 00:12:51.339127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.746  [2024-12-10 00:12:51.344273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.746  [2024-12-10 00:12:51.344293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.746  [2024-12-10 00:12:51.344301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.746  [2024-12-10 00:12:51.349390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.746  [2024-12-10 00:12:51.349411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.746  [2024-12-10 00:12:51.349419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.746  [2024-12-10 00:12:51.355103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.746  [2024-12-10 00:12:51.355123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.746  [2024-12-10 00:12:51.355131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.360532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.360552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.360560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.365991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.366012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.366020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.371358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.371378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.371386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.376842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.376868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.376876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.383078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.383099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.383107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.389813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.389835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.389843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.397442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.397464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.397472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.404975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.404997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.405005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.412506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.412527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.412535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.419952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.419973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.419981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.428128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.428150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.428158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.435707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.435728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.435736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.443443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.443465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.443473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.451032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.451053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.451061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.458676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.458697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.458705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.466604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.466626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.466634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.474190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.474211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.474219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.482087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.482109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.482118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.489736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.489757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.489765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.497174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.497195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.497203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.500639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.500663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.500671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.506583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.747  [2024-12-10 00:12:51.506604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.747  [2024-12-10 00:12:51.506612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.747  [2024-12-10 00:12:51.512328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.512348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.512357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.517954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.517974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.517982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.523545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.523567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.523576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.529382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.529403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.529411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.534908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.534929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.534938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.540136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.540157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.540171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.545414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.545434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.545442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.550614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.550636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.550644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.555941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.555962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.555970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.561256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.561275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.561282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.566535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.566555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.566562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.572085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.572104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.572112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.577326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.577346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.577354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.582813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.582834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.582842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.588346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.588367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.588374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.593948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.593968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.593979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:35.748  [2024-12-10 00:12:51.599498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x22b76a0)
00:31:35.748  [2024-12-10 00:12:51.599518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:35.748  [2024-12-10 00:12:51.599526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:36.008       5208.50 IOPS,   651.06 MiB/s
00:31:36.008                                                                                                  Latency(us)
00:31:36.008  
[2024-12-09T23:12:51.865Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:36.008  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:31:36.008  	 nvme0n1             :       2.00    5210.65     651.33       0.00     0.00    3068.15     608.55   15541.39
00:31:36.008  
[2024-12-09T23:12:51.865Z]  ===================================================================================================================
00:31:36.008  
[2024-12-09T23:12:51.865Z]  Total                       :               5210.65     651.33       0.00     0.00    3068.15     608.55   15541.39
00:31:36.008  {
00:31:36.008    "results": [
00:31:36.008      {
00:31:36.008        "job": "nvme0n1",
00:31:36.008        "core_mask": "0x2",
00:31:36.008        "workload": "randread",
00:31:36.008        "status": "finished",
00:31:36.008        "queue_depth": 16,
00:31:36.008        "io_size": 131072,
00:31:36.008        "runtime": 2.002246,
00:31:36.008        "iops": 5210.648441799859,
00:31:36.008        "mibps": 651.3310552249824,
00:31:36.008        "io_failed": 0,
00:31:36.008        "io_timeout": 0,
00:31:36.008        "avg_latency_us": 3068.1525160548263,
00:31:36.008        "min_latency_us": 608.5485714285714,
00:31:36.008        "max_latency_us": 15541.394285714287
00:31:36.008      }
00:31:36.008    ],
00:31:36.008    "core_count": 1
00:31:36.008  }
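The JSON summary above is internally consistent: the reported throughput is just IOPS times the 131072-byte I/O size. A minimal cross-check, using the values from the block above:

```shell
# Cross-check the bperf JSON summary: MiB/s = IOPS * io_size / 2^20.
# iops and io_size are copied from the results block above.
iops=5210.648441799859
io_size=131072
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps MiB/s"   # agrees with the reported mibps of 651.331...
```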
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:36.008  			| .driver_specific
00:31:36.008  			| .nvme_error
00:31:36.008  			| .status_code
00:31:36.008  			| .command_transient_transport_error'
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:36.008   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 337 > 0 ))
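The `get_transient_errcount` helper traced above pipes `bdev_get_iostat` through the jq filter at host/digest.sh@28 and asserts the count is non-zero. A stand-alone sketch of that extraction, with a hypothetical iostat reply inlined (the count 337 is taken from the check above) and sed substituted for jq so the sketch has no external dependency:

```shell
# Hypothetical bdev_get_iostat reply, trimmed to the fields the filter walks:
# .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
iostat='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":337}}}}]}'

# Same extraction as the jq filter in host/digest.sh, done with POSIX sed.
errcount=$(printf '%s' "$iostat" |
    sed -n 's/.*"command_transient_transport_error":\([0-9]*\).*/\1/p')

# host/digest.sh@71 then only requires the count to be non-zero.
[ "$errcount" -gt 0 ] && echo "transient transport errors: $errcount"
```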
00:31:36.008   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3238691
00:31:36.008   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3238691 ']'
00:31:36.008   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3238691
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:36.008   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:36.008    00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238691
00:31:36.267   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:36.267   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:36.267   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238691'
00:31:36.267  killing process with pid 3238691
00:31:36.267   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3238691
00:31:36.267  Received shutdown signal, test time was about 2.000000 seconds
00:31:36.267  
00:31:36.268                                                                                                  Latency(us)
00:31:36.268  
[2024-12-09T23:12:52.125Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:36.268  
[2024-12-09T23:12:52.125Z]  ===================================================================================================================
00:31:36.268  
[2024-12-09T23:12:52.125Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:36.268   00:12:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3238691
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3239355
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3239355 /var/tmp/bperf.sock
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3239355 ']'
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:36.268  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:36.268   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:36.268  [2024-12-10 00:12:52.078738] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:36.268  [2024-12-10 00:12:52.078791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239355 ]
00:31:36.527  [2024-12-10 00:12:52.151812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:36.527  [2024-12-10 00:12:52.190200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:36.527   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:36.527   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:36.527   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:36.527   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:36.785   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:37.044  nvme0n1
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:37.044   00:12:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
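The xtrace lines above perform the digest-error setup for this bperf pass: configure error statistics and infinite retries, clear any prior crc32c injection, attach the controller with data digest enabled, re-arm the injector in corrupt mode, then start I/O. As a self-contained sketch (socket path and arguments are copied from this log; `rpc` just echoes the command line so the sketch runs without a live bperf instance):

```shell
# Stand-in for scripts/rpc.py against the bperf RPC socket; echoes instead of
# sending, so each step below is visible without a running SPDK app.
rpc() { echo rpc.py -s /var/tmp/bperf.sock "$@"; }

rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
rpc accel_error_inject_error -o crc32c -t disable        # clear prior injection
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0       # data digest enabled
rpc accel_error_inject_error -o crc32c -t corrupt -i 256 # corrupt 256 crc32c ops
```

With the injector corrupting crc32c results, every READ/WRITE completion that fails the digest check surfaces as the COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries seen throughout this log.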
00:31:37.044  Running I/O for 2 seconds...
00:31:37.044  [2024-12-10 00:12:52.870533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1f80
00:31:37.044  [2024-12-10 00:12:52.871466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.044  [2024-12-10 00:12:52.871494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:37.044  [2024-12-10 00:12:52.880733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee4de8
00:31:37.044  [2024-12-10 00:12:52.881967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:89 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.044  [2024-12-10 00:12:52.881989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.044  [2024-12-10 00:12:52.889563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeaef0
00:31:37.044  [2024-12-10 00:12:52.890492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.044  [2024-12-10 00:12:52.890514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:37.044  [2024-12-10 00:12:52.897923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef6cc8
00:31:37.044  [2024-12-10 00:12:52.898918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.044  [2024-12-10 00:12:52.898937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:37.303  [2024-12-10 00:12:52.907514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef2948
00:31:37.303  [2024-12-10 00:12:52.908633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.303  [2024-12-10 00:12:52.908652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:37.303  [2024-12-10 00:12:52.916660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8d30
00:31:37.303  [2024-12-10 00:12:52.917746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.303  [2024-12-10 00:12:52.917764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:31:37.303  [2024-12-10 00:12:52.924916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef2510
00:31:37.303  [2024-12-10 00:12:52.925753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.303  [2024-12-10 00:12:52.925777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:37.303  [2024-12-10 00:12:52.933956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee9168
00:31:37.303  [2024-12-10 00:12:52.934745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.303  [2024-12-10 00:12:52.934763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:37.303  [2024-12-10 00:12:52.945463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efe2e8
00:31:37.303  [2024-12-10 00:12:52.946965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.946983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.952007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eebb98
00:31:37.304  [2024-12-10 00:12:52.952777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.952795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.963092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee99d8
00:31:37.304  [2024-12-10 00:12:52.964384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.964404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.971376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef6458
00:31:37.304  [2024-12-10 00:12:52.972612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.972631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.981334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0ff8
00:31:37.304  [2024-12-10 00:12:52.982404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.982423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.990625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee5658
00:31:37.304  [2024-12-10 00:12:52.992050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.992068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:52.997116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0ff8
00:31:37.304  [2024-12-10 00:12:52.997814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:52.997832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.006248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efda78
00:31:37.304  [2024-12-10 00:12:53.006954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.006972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.016603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee3060
00:31:37.304  [2024-12-10 00:12:53.017553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.017573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.025971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee6b70
00:31:37.304  [2024-12-10 00:12:53.026946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.026964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.035478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee95a0
00:31:37.304  [2024-12-10 00:12:53.036752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:22207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.036770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.043655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef1430
00:31:37.304  [2024-12-10 00:12:53.044918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.044936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.051378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eedd58
00:31:37.304  [2024-12-10 00:12:53.051989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:16680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.052007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.060374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee6300
00:31:37.304  [2024-12-10 00:12:53.060964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.060982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.069721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee0630
00:31:37.304  [2024-12-10 00:12:53.070386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.070404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.078976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee12d8
00:31:37.304  [2024-12-10 00:12:53.079588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.079607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.088648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1f80
00:31:37.304  [2024-12-10 00:12:53.089596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.089616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.097702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7538
00:31:37.304  [2024-12-10 00:12:53.098327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.098345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.105950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeb760
00:31:37.304  [2024-12-10 00:12:53.106664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.106683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.117028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee9168
00:31:37.304  [2024-12-10 00:12:53.118203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.118222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.126420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf988
00:31:37.304  [2024-12-10 00:12:53.127608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.127626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.135954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef57b0
00:31:37.304  [2024-12-10 00:12:53.137322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.137340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:31:37.304  [2024-12-10 00:12:53.142469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0350
00:31:37.304  [2024-12-10 00:12:53.143096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.304  [2024-12-10 00:12:53.143114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:31:37.305  [2024-12-10 00:12:53.151866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee9168
00:31:37.305  [2024-12-10 00:12:53.152610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.305  [2024-12-10 00:12:53.152628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:31:37.564  [2024-12-10 00:12:53.161683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eefae0
00:31:37.564  [2024-12-10 00:12:53.162427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.564  [2024-12-10 00:12:53.162449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:37.564  [2024-12-10 00:12:53.170881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee88f8
00:31:37.564  [2024-12-10 00:12:53.171503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.564  [2024-12-10 00:12:53.171523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:31:37.564  [2024-12-10 00:12:53.181690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0788
00:31:37.564  [2024-12-10 00:12:53.183194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.564  [2024-12-10 00:12:53.183212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.564  [2024-12-10 00:12:53.188033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8d30
00:31:37.564  [2024-12-10 00:12:53.188741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.564  [2024-12-10 00:12:53.188759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:37.564  [2024-12-10 00:12:53.197620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eed920
00:31:37.564  [2024-12-10 00:12:53.198478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.198498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.207560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edfdc0
00:31:37.565  [2024-12-10 00:12:53.208610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.208629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.217629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee73e0
00:31:37.565  [2024-12-10 00:12:53.219208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.219226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.224228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeb328
00:31:37.565  [2024-12-10 00:12:53.225045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:18296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.225063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.233597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee99d8
00:31:37.565  [2024-12-10 00:12:53.234562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.234581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.242741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee12d8
00:31:37.565  [2024-12-10 00:12:53.243608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.243627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.253440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf988
00:31:37.565  [2024-12-10 00:12:53.254902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:2243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.254920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.262865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1f80
00:31:37.565  [2024-12-10 00:12:53.264347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.264365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.270597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb048
00:31:37.565  [2024-12-10 00:12:53.271582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.271602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.279563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:37.565  [2024-12-10 00:12:53.280649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:6320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.280668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.288571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:37.565  [2024-12-10 00:12:53.289681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.289700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.297604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:37.565  [2024-12-10 00:12:53.298691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:2874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.298710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.306565] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:37.565  [2024-12-10 00:12:53.307654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.307673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.315584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee9e10
00:31:37.565  [2024-12-10 00:12:53.316588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:20087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.316608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.323922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef5be8
00:31:37.565  [2024-12-10 00:12:53.324898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.324917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.333341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee6300
00:31:37.565  [2024-12-10 00:12:53.334390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.334408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.342696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0bc0
00:31:37.565  [2024-12-10 00:12:53.344006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.344024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.352136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efe720
00:31:37.565  [2024-12-10 00:12:53.353495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.353513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.361503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:37.565  [2024-12-10 00:12:53.363005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.363023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.368657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc998
00:31:37.565  [2024-12-10 00:12:53.369692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.369710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.377715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee23b8
00:31:37.565  [2024-12-10 00:12:53.378674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.565  [2024-12-10 00:12:53.378693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:31:37.565  [2024-12-10 00:12:53.386521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc560
00:31:37.565  [2024-12-10 00:12:53.387238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.566  [2024-12-10 00:12:53.387257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:31:37.566  [2024-12-10 00:12:53.395579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efe720
00:31:37.566  [2024-12-10 00:12:53.396151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.566  [2024-12-10 00:12:53.396181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:31:37.566  [2024-12-10 00:12:53.404998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee99d8
00:31:37.566  [2024-12-10 00:12:53.405688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.566  [2024-12-10 00:12:53.405707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:31:37.566  [2024-12-10 00:12:53.413954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee4de8
00:31:37.566  [2024-12-10 00:12:53.414909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.566  [2024-12-10 00:12:53.414927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:31:37.824  [2024-12-10 00:12:53.423256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf988
00:31:37.824  [2024-12-10 00:12:53.424219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.824  [2024-12-10 00:12:53.424239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:31:37.824  [2024-12-10 00:12:53.431683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee6b70
00:31:37.824  [2024-12-10 00:12:53.432909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.824  [2024-12-10 00:12:53.432928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:37.824  [2024-12-10 00:12:53.440787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efa3a0
00:31:37.824  [2024-12-10 00:12:53.441781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.824  [2024-12-10 00:12:53.441799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.449594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee01f8
00:31:37.825  [2024-12-10 00:12:53.450862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.450880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.457902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeb760
00:31:37.825  [2024-12-10 00:12:53.458596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.458616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.466862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc128
00:31:37.825  [2024-12-10 00:12:53.467531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.467551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.475834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef96f8
00:31:37.825  [2024-12-10 00:12:53.476494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.476512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.484901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eec408
00:31:37.825  [2024-12-10 00:12:53.485589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:25042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.485608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.493854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8d30
00:31:37.825  [2024-12-10 00:12:53.494552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.494571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.502274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eed0b0
00:31:37.825  [2024-12-10 00:12:53.502918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.502937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.513230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efcdd0
00:31:37.825  [2024-12-10 00:12:53.514252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.514271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.522348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eedd58
00:31:37.825  [2024-12-10 00:12:53.523401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.523419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.531465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef9f68
00:31:37.825  [2024-12-10 00:12:53.532466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.532484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.540685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eee5c8
00:31:37.825  [2024-12-10 00:12:53.541819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.541838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.548305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb8b8
00:31:37.825  [2024-12-10 00:12:53.548842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:8240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.548860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.557387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb048
00:31:37.825  [2024-12-10 00:12:53.558169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.558187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.566365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0788
00:31:37.825  [2024-12-10 00:12:53.567148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.567170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.575336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:37.825  [2024-12-10 00:12:53.576110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.576127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.584293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8e88
00:31:37.825  [2024-12-10 00:12:53.585064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.585082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.593241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb480
00:31:37.825  [2024-12-10 00:12:53.594012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.594032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.603412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eee5c8
00:31:37.825  [2024-12-10 00:12:53.604555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.604573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.611183] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efa7d8
00:31:37.825  [2024-12-10 00:12:53.611878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.611896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.619537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf550
00:31:37.825  [2024-12-10 00:12:53.620199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.620233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.629509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb480
00:31:37.825  [2024-12-10 00:12:53.630408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.825  [2024-12-10 00:12:53.630429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.825  [2024-12-10 00:12:53.638693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8e88
00:31:37.825  [2024-12-10 00:12:53.639595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.826  [2024-12-10 00:12:53.639614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.826  [2024-12-10 00:12:53.647793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:37.826  [2024-12-10 00:12:53.648711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.826  [2024-12-10 00:12:53.648729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.826  [2024-12-10 00:12:53.656896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0788
00:31:37.826  [2024-12-10 00:12:53.657863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:3413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.826  [2024-12-10 00:12:53.657882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.826  [2024-12-10 00:12:53.665957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7da8
00:31:37.826  [2024-12-10 00:12:53.666857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.826  [2024-12-10 00:12:53.666876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:37.826  [2024-12-10 00:12:53.675051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eebfd0
00:31:37.826  [2024-12-10 00:12:53.675956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:37.826  [2024-12-10 00:12:53.675975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.684247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee88f8
00:31:38.085  [2024-12-10 00:12:53.685176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.685196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.693359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eed920
00:31:38.085  [2024-12-10 00:12:53.694239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.694258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.702376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeb760
00:31:38.085  [2024-12-10 00:12:53.703271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.703290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.711690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eefae0
00:31:38.085  [2024-12-10 00:12:53.712657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:9881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.712680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.721014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee7c50
00:31:38.085  [2024-12-10 00:12:53.722149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.722172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.728417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef1ca0
00:31:38.085  [2024-12-10 00:12:53.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.729112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.737337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf550
00:31:38.085  [2024-12-10 00:12:53.737988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.738006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.746297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb480
00:31:38.085  [2024-12-10 00:12:53.746945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:24826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.746963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.755278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee2c28
00:31:38.085  [2024-12-10 00:12:53.755926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.085  [2024-12-10 00:12:53.755945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.085  [2024-12-10 00:12:53.764280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef5378
00:31:38.085  [2024-12-10 00:12:53.764938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.764956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.773231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee5a90
00:31:38.086  [2024-12-10 00:12:53.773865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.773883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.782466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef9b30
00:31:38.086  [2024-12-10 00:12:53.782891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:25176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.782910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.791584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0350
00:31:38.086  [2024-12-10 00:12:53.792382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.792401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.801741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeee38
00:31:38.086  [2024-12-10 00:12:53.802936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.802954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.810086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeaef0
00:31:38.086  [2024-12-10 00:12:53.810945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.810963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.818953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc560
00:31:38.086  [2024-12-10 00:12:53.819817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.819835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.827933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ede8a8
00:31:38.086  [2024-12-10 00:12:53.828827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.828845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.836922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef4b08
00:31:38.086  [2024-12-10 00:12:53.837819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.837837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.845889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7970
00:31:38.086  [2024-12-10 00:12:53.846781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.846799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.855086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eed4e8
00:31:38.086  [2024-12-10 00:12:53.855761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.855779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:38.086      28147.00 IOPS,   109.95 MiB/s
00:31:38.086  [2024-12-10 00:12:53.864139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:38.086  [2024-12-10 00:12:53.865190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.865212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.873051] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:38.086  [2024-12-10 00:12:53.874076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.874093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.882132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:38.086  [2024-12-10 00:12:53.883145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.883163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.890586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:38.086  [2024-12-10 00:12:53.891584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.891601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.899775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee73e0
00:31:38.086  [2024-12-10 00:12:53.900346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.900365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.908910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edfdc0
00:31:38.086  [2024-12-10 00:12:53.909856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.909874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.917845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef9b30
00:31:38.086  [2024-12-10 00:12:53.918797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.918815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.926883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeaef0
00:31:38.086  [2024-12-10 00:12:53.927670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.927688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.086  [2024-12-10 00:12:53.935837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efcdd0
00:31:38.086  [2024-12-10 00:12:53.936723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.086  [2024-12-10 00:12:53.936741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.945043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eedd58
00:31:38.347  [2024-12-10 00:12:53.945946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.945964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.954246] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8e88
00:31:38.347  [2024-12-10 00:12:53.955161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:21517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.955183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.963236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:38.347  [2024-12-10 00:12:53.964128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.964146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.973340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee4140
00:31:38.347  [2024-12-10 00:12:53.974683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.974702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.982503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016edf550
00:31:38.347  [2024-12-10 00:12:53.983874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.983892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.988647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee7818
00:31:38.347  [2024-12-10 00:12:53.989256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:11593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.989274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:53.998630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7970
00:31:38.347  [2024-12-10 00:12:53.999394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:53.999412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.007843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:38.347  [2024-12-10 00:12:54.008720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.008738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.016351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee38d0
00:31:38.347  [2024-12-10 00:12:54.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.017242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.026420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef4f40
00:31:38.347  [2024-12-10 00:12:54.027482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.027500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.035387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:38.347  [2024-12-10 00:12:54.036384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.036402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.044327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef2510
00:31:38.347  [2024-12-10 00:12:54.045318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.045336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.053289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efb048
00:31:38.347  [2024-12-10 00:12:54.054298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.054317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.347  [2024-12-10 00:12:54.062261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef5378
00:31:38.347  [2024-12-10 00:12:54.063294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.347  [2024-12-10 00:12:54.063312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.071227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef46d0
00:31:38.348  [2024-12-10 00:12:54.072243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.072262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.081292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee95a0
00:31:38.348  [2024-12-10 00:12:54.082763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.082782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.087717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eebfd0
00:31:38.348  [2024-12-10 00:12:54.088336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.088355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.096308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eefae0
00:31:38.348  [2024-12-10 00:12:54.096926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.096946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.105734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee84c0
00:31:38.348  [2024-12-10 00:12:54.106519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.106537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.116908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef57b0
00:31:38.348  [2024-12-10 00:12:54.118134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.118153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.126357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0788
00:31:38.348  [2024-12-10 00:12:54.127683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.127701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.135773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ede038
00:31:38.348  [2024-12-10 00:12:54.137222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.137240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.142531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef92c0
00:31:38.348  [2024-12-10 00:12:54.143330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:16654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.143348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.153675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee38d0
00:31:38.348  [2024-12-10 00:12:54.154921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.154939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.163113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efe720
00:31:38.348  [2024-12-10 00:12:54.164594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.164612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.172657] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7970
00:31:38.348  [2024-12-10 00:12:54.174209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.174227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.179155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7538
00:31:38.348  [2024-12-10 00:12:54.179992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.180013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.189791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efef90
00:31:38.348  [2024-12-10 00:12:54.190734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.190753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:38.348  [2024-12-10 00:12:54.199142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1b48
00:31:38.348  [2024-12-10 00:12:54.200448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.348  [2024-12-10 00:12:54.200467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.207637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee0630
00:31:38.607  [2024-12-10 00:12:54.208487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.607  [2024-12-10 00:12:54.208505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.217867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0350
00:31:38.607  [2024-12-10 00:12:54.219292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.607  [2024-12-10 00:12:54.219310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.227347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee88f8
00:31:38.607  [2024-12-10 00:12:54.228899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.607  [2024-12-10 00:12:54.228916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.233760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efda78
00:31:38.607  [2024-12-10 00:12:54.234460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.607  [2024-12-10 00:12:54.234478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.242835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:38.607  [2024-12-10 00:12:54.243453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.607  [2024-12-10 00:12:54.243472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:38.607  [2024-12-10 00:12:54.251115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee88f8
00:31:38.607  [2024-12-10 00:12:54.251796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.251813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.262064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efd208
00:31:38.608  [2024-12-10 00:12:54.263154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.263175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.270575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7da8
00:31:38.608  [2024-12-10 00:12:54.271602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.271619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.279954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee7818
00:31:38.608  [2024-12-10 00:12:54.281142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.281160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.289377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef92c0
00:31:38.608  [2024-12-10 00:12:54.290686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.290705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.297874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efbcf0
00:31:38.608  [2024-12-10 00:12:54.298942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.298961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.306830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0350
00:31:38.608  [2024-12-10 00:12:54.307804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.307822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.315171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7da8
00:31:38.608  [2024-12-10 00:12:54.316026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.316044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.325610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef46d0
00:31:38.608  [2024-12-10 00:12:54.326953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.326970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.333854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8d30
00:31:38.608  [2024-12-10 00:12:54.335232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.335250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.341570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef9b30
00:31:38.608  [2024-12-10 00:12:54.342278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:24279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.342296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.350675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef35f0
00:31:38.608  [2024-12-10 00:12:54.351422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.351439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.359895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8e88
00:31:38.608  [2024-12-10 00:12:54.360615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.360633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.370137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef92c0
00:31:38.608  [2024-12-10 00:12:54.370904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:14888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.370922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.378586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee5ec8
00:31:38.608  [2024-12-10 00:12:54.379997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.380015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.386347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8d30
00:31:38.608  [2024-12-10 00:12:54.387078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.387097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.395904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee49b0
00:31:38.608  [2024-12-10 00:12:54.396777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.396795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.406809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee8088
00:31:38.608  [2024-12-10 00:12:54.408037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.408063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.414174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee6738
00:31:38.608  [2024-12-10 00:12:54.414791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.414812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.422516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef31b8
00:31:38.608  [2024-12-10 00:12:54.423217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.423235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.433524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee12d8
00:31:38.608  [2024-12-10 00:12:54.434578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.434597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.442909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efdeb0
00:31:38.608  [2024-12-10 00:12:54.444135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.444154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.450283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee27f0
00:31:38.608  [2024-12-10 00:12:54.450903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.450920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:31:38.608  [2024-12-10 00:12:54.459774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee12d8
00:31:38.608  [2024-12-10 00:12:54.460674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:21238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.608  [2024-12-10 00:12:54.460692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.469395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eed0b0
00:31:38.868  [2024-12-10 00:12:54.470405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.470424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.478769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efbcf0
00:31:38.868  [2024-12-10 00:12:54.479775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.479795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.487491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ede038
00:31:38.868  [2024-12-10 00:12:54.488264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.488283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.498613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef270
00:31:38.868  [2024-12-10 00:12:54.500221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.500240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.505047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeff18
00:31:38.868  [2024-12-10 00:12:54.505715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.505735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.514476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eecc78
00:31:38.868  [2024-12-10 00:12:54.515382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.515401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.523973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1b48
00:31:38.868  [2024-12-10 00:12:54.525123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.525142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.534785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee1b48
00:31:38.868  [2024-12-10 00:12:54.536304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.536322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.541131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8e88
00:31:38.868  [2024-12-10 00:12:54.541766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.541784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.550262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef57b0
00:31:38.868  [2024-12-10 00:12:54.550884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.550903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.560316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef8a50
00:31:38.868  [2024-12-10 00:12:54.561457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.561476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.567487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef6a8
00:31:38.868  [2024-12-10 00:12:54.568187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.568204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.578358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee3498
00:31:38.868  [2024-12-10 00:12:54.579481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:22448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.579499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.587344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efe2e8
00:31:38.868  [2024-12-10 00:12:54.588531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.588549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.596374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eee5c8
00:31:38.868  [2024-12-10 00:12:54.597127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.597146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.604878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef20d8
00:31:38.868  [2024-12-10 00:12:54.606209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.606227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.612574] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef0bc0
00:31:38.868  [2024-12-10 00:12:54.613230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.613248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.621979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efd640
00:31:38.868  [2024-12-10 00:12:54.622842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.622860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.632968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef20d8
00:31:38.868  [2024-12-10 00:12:54.634247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.868  [2024-12-10 00:12:54.634266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:31:38.868  [2024-12-10 00:12:54.641935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef1430
00:31:38.869  [2024-12-10 00:12:54.642967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.642986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.651372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eecc78
00:31:38.869  [2024-12-10 00:12:54.652629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.652650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.660486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eff3c8
00:31:38.869  [2024-12-10 00:12:54.661850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.661868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.669316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef6a8
00:31:38.869  [2024-12-10 00:12:54.670644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:24962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.670663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.677729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee4578
00:31:38.869  [2024-12-10 00:12:54.678812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.678831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.686241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef1ca0
00:31:38.869  [2024-12-10 00:12:54.687093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.687112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.695040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef5378
00:31:38.869  [2024-12-10 00:12:54.695844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.695863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.706122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef3e60
00:31:38.869  [2024-12-10 00:12:54.707381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.707399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.712560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eec408
00:31:38.869  [2024-12-10 00:12:54.713096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.713114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:31:38.869  [2024-12-10 00:12:54.722239] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eea680
00:31:38.869  [2024-12-10 00:12:54.722785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:38.869  [2024-12-10 00:12:54.722804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:39.128  [2024-12-10 00:12:54.732329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc998
00:31:39.128  [2024-12-10 00:12:54.733369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.128  [2024-12-10 00:12:54.733388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:31:39.128  [2024-12-10 00:12:54.741939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eec408
00:31:39.128  [2024-12-10 00:12:54.742931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.128  [2024-12-10 00:12:54.742948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.751041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7da8
00:31:39.129  [2024-12-10 00:12:54.752275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.752294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.760606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef9f68
00:31:39.129  [2024-12-10 00:12:54.761995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.762013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.770008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee5ec8
00:31:39.129  [2024-12-10 00:12:54.771482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.771499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.776461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc998
00:31:39.129  [2024-12-10 00:12:54.777255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.777274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.787594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee0a68
00:31:39.129  [2024-12-10 00:12:54.789057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.789075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.794154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7da8
00:31:39.129  [2024-12-10 00:12:54.794832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.794851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.805380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef7538
00:31:39.129  [2024-12-10 00:12:54.806584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.806602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.814687] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ee3498
00:31:39.129  [2024-12-10 00:12:54.815936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.815953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.821865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeaef0
00:31:39.129  [2024-12-10 00:12:54.822754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.822771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.831318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef5378
00:31:39.129  [2024-12-10 00:12:54.832337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.832354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.840422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eef6a8
00:31:39.129  [2024-12-10 00:12:54.840951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.840969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.849773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016ef3a28
00:31:39.129  [2024-12-10 00:12:54.850456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.850475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:39.129  [2024-12-10 00:12:54.858292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016eeaef0
00:31:39.129  [2024-12-10 00:12:54.859585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.859603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:39.129      28193.00 IOPS,   110.13 MiB/s
[2024-12-09T23:12:54.986Z] [2024-12-10 00:12:54.866919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701410) with pdu=0x200016efc560
00:31:39.129  [2024-12-10 00:12:54.867593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:39.129  [2024-12-10 00:12:54.867611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:31:39.129  
00:31:39.129                                                                                                  Latency(us)
00:31:39.129  
[2024-12-09T23:12:54.986Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:39.129  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:31:39.129  	 nvme0n1             :       2.01   28217.62     110.23       0.00     0.00    4530.52    2122.12   12607.88
00:31:39.129  
[2024-12-09T23:12:54.986Z]  ===================================================================================================================
00:31:39.129  
[2024-12-09T23:12:54.986Z]  Total                       :              28217.62     110.23       0.00     0.00    4530.52    2122.12   12607.88
00:31:39.129  {
00:31:39.129    "results": [
00:31:39.129      {
00:31:39.129        "job": "nvme0n1",
00:31:39.129        "core_mask": "0x2",
00:31:39.129        "workload": "randwrite",
00:31:39.129        "status": "finished",
00:31:39.129        "queue_depth": 128,
00:31:39.129        "io_size": 4096,
00:31:39.129        "runtime": 2.005662,
00:31:39.129        "iops": 28217.61592930414,
00:31:39.129        "mibps": 110.22506222384429,
00:31:39.129        "io_failed": 0,
00:31:39.129        "io_timeout": 0,
00:31:39.129        "avg_latency_us": 4530.523842674979,
00:31:39.129        "min_latency_us": 2122.118095238095,
00:31:39.129        "max_latency_us": 12607.878095238095
00:31:39.129      }
00:31:39.129    ],
00:31:39.129    "core_count": 1
00:31:39.129  }
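The `mibps` field in the bdevperf JSON summary above is derived directly from `iops` and `io_size`; a minimal sketch reproducing that arithmetic (values copied from the results block, 4096-byte I/Os):

```python
# bdevperf reports both IOPS and MiB/s; throughput is iops * io_size
# scaled to MiB (1 MiB = 1024 * 1024 bytes). With 4096-byte I/Os this
# reduces to iops / 256.
iops = 28217.61592930414
io_size = 4096  # bytes per I/O, from "io_size" in the JSON above

mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # 110.23, matching the "mibps" field
```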
00:31:39.129    00:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:39.129    00:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:39.129    00:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:39.129  			| .driver_specific
00:31:39.129  			| .nvme_error
00:31:39.129  			| .status_code
00:31:39.129  			| .command_transient_transport_error'
00:31:39.129    00:12:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
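The `get_transient_errcount` step above pipes `bdev_get_iostat` output through a jq filter and asserts the count is non-zero (`(( 222 > 0 ))`). A standalone sketch of the same extraction in Python, using a hypothetical payload shaped like the real RPC response (the actual data comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`):

```python
import json

# Hypothetical bdev_get_iostat-style payload; field names mirror the jq
# filter in digest.sh: .bdevs[0] | .driver_specific | .nvme_error
#                      | .status_code | .command_transient_transport_error
payload = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {"command_transient_transport_error": 222}
        }
      }
    }
  ]
}
""")

count = (payload["bdevs"][0]["driver_specific"]
         ["nvme_error"]["status_code"]
         ["command_transient_transport_error"])
assert count > 0  # digest.sh@71: the test passes only if errors were seen
print(count)      # 222 for this sample
```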
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3239355
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3239355 ']'
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3239355
00:31:39.389    00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:39.389    00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3239355
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3239355'
00:31:39.389  killing process with pid 3239355
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3239355
00:31:39.389  Received shutdown signal, test time was about 2.000000 seconds
00:31:39.389  
00:31:39.389                                                                                                  Latency(us)
00:31:39.389  
[2024-12-09T23:12:55.246Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:39.389  
[2024-12-09T23:12:55.246Z]  ===================================================================================================================
00:31:39.389  
[2024-12-09T23:12:55.246Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:39.389   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3239355
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3239818
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3239818 /var/tmp/bperf.sock
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3239818 ']'
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:39.649  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:39.649   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:39.649  [2024-12-10 00:12:55.353176] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:39.649  [2024-12-10 00:12:55.353223] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239818 ]
00:31:39.650  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:39.650  Zero copy mechanism will not be used.
00:31:39.650  [2024-12-10 00:12:55.427036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:39.650  [2024-12-10 00:12:55.466030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:39.908   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:39.908   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:31:39.908   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:39.908   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:40.166   00:12:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:31:40.425  nvme0n1
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:31:40.425   00:12:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
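The RPC sequence above arms `accel_error_inject_error -o crc32c -t corrupt -i 32`, so every 32nd CRC32C computation is corrupted and the receive side logs `Data digest error` in `tcp.c:data_crc32_calc_done`. A toy sketch of that receive-side check; note NVMe/TCP actually uses CRC32C (Castagnoli), and `zlib.crc32` (plain CRC32) is only a stand-in here to illustrate the mismatch detection:

```python
import zlib

def digest(payload: bytes) -> int:
    # Stand-in for the CRC32C data digest computed over a PDU's data.
    return zlib.crc32(payload) & 0xFFFFFFFF

pdu_data = b"\x00" * 4096           # one 4 KiB WRITE payload
good = digest(pdu_data)

# Mimic the `-t corrupt` injection: flip a bit in the computed digest.
corrupted = good ^ 0x1

# The receiver recomputes the digest and compares; a mismatch is what
# produces the *ERROR* lines and the TRANSIENT TRANSPORT ERROR completions.
assert corrupted != digest(pdu_data)
```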
00:31:40.425  I/O size of 131072 is greater than zero copy threshold (65536).
00:31:40.425  Zero copy mechanism will not be used.
00:31:40.425  Running I/O for 2 seconds...
00:31:40.425  [2024-12-10 00:12:56.198162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.198254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.198286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.202661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.202719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.202739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.207235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.207291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.207310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.211922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.211972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.211991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.216576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.216628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.216647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.220942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.220999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.221018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.225260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.225327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.225356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.229541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.229608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.229626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.233793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.233862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.233881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.238016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.238071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.238089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.242237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.242308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.242326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.246490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.246540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.246558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.251255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.251322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.425  [2024-12-10 00:12:56.251341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.425  [2024-12-10 00:12:56.255556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.425  [2024-12-10 00:12:56.255627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.255645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.426  [2024-12-10 00:12:56.260482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.426  [2024-12-10 00:12:56.260657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.260675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.426  [2024-12-10 00:12:56.266624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.426  [2024-12-10 00:12:56.266815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.266834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.426  [2024-12-10 00:12:56.271540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.426  [2024-12-10 00:12:56.271643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.271661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.426  [2024-12-10 00:12:56.276025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.426  [2024-12-10 00:12:56.276134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.276156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.426  [2024-12-10 00:12:56.280357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.426  [2024-12-10 00:12:56.280448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.426  [2024-12-10 00:12:56.280466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.284800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.284901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.284920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.289102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.289207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.289226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.293839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.293991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.294008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.299758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.299939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.299957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.304984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.305054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.305073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.310660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.310852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.310871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.317282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.317429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.317447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.323288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.323431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.323449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.329396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.329545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.329563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.335622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.335798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.335816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.341739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.341935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.341954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.348319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.348420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.348438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.355721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.355883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.355901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.361815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.361874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.686  [2024-12-10 00:12:56.361892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.686  [2024-12-10 00:12:56.366744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.686  [2024-12-10 00:12:56.366796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.366814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.371283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.371354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.371372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.375746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.375834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.375851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.380017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.380079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.380097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.384119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.384187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.384206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.388507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.388562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.388580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.392844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.392948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.392966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.397210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.397297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.397315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.401422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.401484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.401502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.405982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.406050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.406068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.410376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.410468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.410493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.414768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.414869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.414888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.419149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.419260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.419278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.423668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.423720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.423737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.427962] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.428039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.428057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.432633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.432682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.432700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.436940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.436995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.437013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.441306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.441377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.441395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.445674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.445753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.445770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.449896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.449956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.449974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.454178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.454245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.454263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.687  [2024-12-10 00:12:56.458153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.687  [2024-12-10 00:12:56.458221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.687  [2024-12-10 00:12:56.458239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.462369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.462441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.462459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.466613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.466707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.466725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.471593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.471672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.471690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.476251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.476301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.476319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.480739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.480788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.480806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.485219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.485275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.485294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.490279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.490347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.490366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.494869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.494919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.494937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.499426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.499543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.499562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.503623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.503693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.503712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.507747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.507859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.507877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.512047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.512139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.512156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.516469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.516542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.516561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.520835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.520910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.520929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.525130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.525190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.525212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.529504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.529566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.529584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.533723] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.533774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.533792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.688  [2024-12-10 00:12:56.538724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.688  [2024-12-10 00:12:56.538783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.688  [2024-12-10 00:12:56.538801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.544096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.544171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.544189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.549316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.549367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.549386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.553893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.553945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.553963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.558700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.558756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.558774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.563900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.564017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.564034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.568636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.568705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.568723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.573888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.573978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.573996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.578346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.578410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.578428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.582628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.582732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.582751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.586879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.586982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.587000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.590899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.590955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.590972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.595037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.595135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.595153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.599394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.599444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.599463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.603669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.603732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.949  [2024-12-10 00:12:56.603754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.949  [2024-12-10 00:12:56.607934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.949  [2024-12-10 00:12:56.608020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.608038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.611926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.611983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.612002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.615965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.616035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.616054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.619970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.620044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.620063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.624010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.624080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.624098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.628010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.628078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.628096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.632052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.632127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.632146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.636094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.636162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.636188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.640093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.640172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.640192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.644117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.644194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.644212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.648127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.648198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.648216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.652098] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.652160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.652185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.656062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.656121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.656139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.660019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.660093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.660111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.664024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.664093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.664113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.667995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.668061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.668080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.672226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.672340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.672358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.676568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.676620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.676638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.681328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.681431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.681450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.686146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.686250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.686268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.691088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.691186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.691219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.696287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.950  [2024-12-10 00:12:56.696358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.950  [2024-12-10 00:12:56.696377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.950  [2024-12-10 00:12:56.700648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.700732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.700750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.704926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.704985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.705003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.709160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.709239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.709257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.713303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.713356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.713378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.717600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.717654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.717671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.721954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.722005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.722022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.726294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.726369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.726387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.730543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.730602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.730619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.734979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.735030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.735048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.739348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.739453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.739470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.743690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.743760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.743778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.747794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.747871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.747889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.752025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.752096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.752115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.756377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.756461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.756479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.761125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.761184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.761202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.765845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.765985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.766003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.770230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.770301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.770319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.774583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.774652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.774671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.778950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.779017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.779035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.783252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.783321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.783341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.787317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.787392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.787421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:40.951  [2024-12-10 00:12:56.791750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.951  [2024-12-10 00:12:56.791825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.951  [2024-12-10 00:12:56.791843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:40.952  [2024-12-10 00:12:56.796579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.952  [2024-12-10 00:12:56.796632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.952  [2024-12-10 00:12:56.796651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:40.952  [2024-12-10 00:12:56.801745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:40.952  [2024-12-10 00:12:56.801798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:40.952  [2024-12-10 00:12:56.801816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.806548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.806598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.806616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.811381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.811432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.811450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.816322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.816402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.816420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.821881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.821941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.821959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.826758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.826840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.826857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.831424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.831499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.831520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.836041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.836103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.836120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.840327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.840381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.840399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.844592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.844694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.844712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.849119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.849184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.849201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.853548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.853601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.853619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.858011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.858060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.858078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.862514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.862615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.862634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.868253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.868435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.868453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.874854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.875012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.875031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.881780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.881908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.881926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.888912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.213  [2024-12-10 00:12:56.889034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.213  [2024-12-10 00:12:56.889052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.213  [2024-12-10 00:12:56.895682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.895876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.895896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.903360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.903522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.903550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.911025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.911165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.911190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.917698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.917850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.917867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.925038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.925193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.925211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.932386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.932492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.932514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.939943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.940075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.940094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.947433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.947554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.947571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.954753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.954879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.954898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.961322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.961455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.961474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.966594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.966669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.966688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.971057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.971126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.971145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.975378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.975430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.975448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.979489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.979540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.979558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.983667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.983730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.983749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.987811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.987868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.987886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.991942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.992001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.992019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:56.996120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:56.996178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:56.996196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:57.000299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:57.000349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:57.000367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:57.004362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:57.004413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:57.004431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:57.008485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:57.008538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:57.008556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:57.012581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.214  [2024-12-10 00:12:57.012630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.214  [2024-12-10 00:12:57.012648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.214  [2024-12-10 00:12:57.016643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.016713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.016732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.020806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.020865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.020883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.024892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.024958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.024977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.029005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.029055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.029073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.033082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.033174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.033208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.037188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.037253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.037272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.041309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.041379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.041397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.045339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.045410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.045429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.049896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.049980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.049998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.055453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.055626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.055648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.061381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.061462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.061481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.215  [2024-12-10 00:12:57.067596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.215  [2024-12-10 00:12:57.067709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.215  [2024-12-10 00:12:57.067727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.074700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.074867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.074886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.081971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.082086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.082105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.089102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.089251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.089270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.096313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.096450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.096468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.103915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.104004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.104021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.110675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.110855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.110873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.117270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.117440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.117458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.123525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.123706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.123724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.129620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.129791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.129808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.135743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.135928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.135954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.141898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.142033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.142052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.147172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.147281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.147299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.151769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.151853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.151871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.156584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.156706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.156724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.161496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.161600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.161619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.166221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.476  [2024-12-10 00:12:57.166367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.476  [2024-12-10 00:12:57.166385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.476  [2024-12-10 00:12:57.172469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.172639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.177632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.177707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.177725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.182402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.182505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.182524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.477       6316.00 IOPS,   789.50 MiB/s
00:31:41.477  [2024-12-10 00:12:57.187947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.188069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.188087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.192334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.192447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.192466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.197555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.197742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.197767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.203468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.203578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.203596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.208584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.208691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.208714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.213372] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.213484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.213503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.217978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.218046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.218064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.222721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.222823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.222841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.227471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.227607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.232123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.232197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.232215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.236680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.236773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.236791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.241425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.241504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.241521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.246124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.246248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.246266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.250451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.250525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.250543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.255266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.255359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.255377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.261149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.261289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.261307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.266159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.266287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.266306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.271501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.271613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.271631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.275798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.477  [2024-12-10 00:12:57.275874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.477  [2024-12-10 00:12:57.275893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.477  [2024-12-10 00:12:57.280063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.280118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.280136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.284277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.284347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.284380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.288495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.288551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.288576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.292591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.292665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.292683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.296672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.296726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.296744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.300796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.300851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.300869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.304977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.305027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.305046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.309364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.309430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.309448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.314584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.314636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.314654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.319871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.319935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.319954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.324299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.324351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.324369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.478  [2024-12-10 00:12:57.328820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.478  [2024-12-10 00:12:57.328883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.478  [2024-12-10 00:12:57.328901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.737  [2024-12-10 00:12:57.333308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.737  [2024-12-10 00:12:57.333403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.333422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.337692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.337761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.337779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.342137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.342204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.342222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.346541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.346591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.346609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.351049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.351146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.351164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.355448] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.355501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.355520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.359830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.359926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.359944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.364333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.364406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.364424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.368756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.368807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.368825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.373284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.373347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.373364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.377504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.377554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.377572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.381991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.382102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.382121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.386571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.386641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.386659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.391326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.391448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.391466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.396553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.396615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.396633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.401596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.401689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.401708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.406290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.406408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.406431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.410980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.411078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.411096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.416014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.416068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.416087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.420780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.420832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.420849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.425894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.426016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.426035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.430521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.430631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.430649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.738  [2024-12-10 00:12:57.434996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.738  [2024-12-10 00:12:57.435073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.738  [2024-12-10 00:12:57.435091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.439334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.439400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.439429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.443726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.443839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.443858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.448252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.448309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.448328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.452790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.452864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.452882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.457586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.457661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.457679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.462144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.462224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.462242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.466576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.466640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.466658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.470922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.471002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.471020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.475329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.475395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.475414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.480231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.480303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.480322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.485250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.485309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.485328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.491054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.491106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.491124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.495946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.496016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.496035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.500652] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.500702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.500721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.505202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.505272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.505290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.509579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.509629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.509647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.513862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.513920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.513938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.518314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.518376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.518395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.522656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.522712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.522730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.527321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.527381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.527403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.531679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.531736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.739  [2024-12-10 00:12:57.531754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.739  [2024-12-10 00:12:57.536132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.739  [2024-12-10 00:12:57.536228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.536247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.540634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.540703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.540722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.545454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.545507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.545526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.550577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.550628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.550647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.555960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.556015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.556033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.561583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.561650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.561669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.566915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.566999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.567017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.573718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.573893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.580096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.580198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.580217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.585914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.586043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.586062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:41.740  [2024-12-10 00:12:57.591153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:41.740  [2024-12-10 00:12:57.591220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:41.740  [2024-12-10 00:12:57.591239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.596344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.596466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.596484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.600903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.600954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.600973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.605483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.605535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.605554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.609896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.609949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.609968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.614132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.614212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.614231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.618512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.618584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.618602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.623044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.623097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.623115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.627873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.627959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.627977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.632802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.632854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.632872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.638232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.638290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.638308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.642943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.643015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.643032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.647371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.647439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.647458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.651768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.651851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.651870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.656009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.656088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.656106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.660317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.660386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.660405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.664747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.664799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.664817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.669021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.669079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.669098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.673153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.673212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.673247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.677570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.677637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.677656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.001  [2024-12-10 00:12:57.682414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.001  [2024-12-10 00:12:57.682481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.001  [2024-12-10 00:12:57.682501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.687185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.687337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.687356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.691992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.692047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.692065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.696362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.696411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.696429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.700802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.700855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.700873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.705419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.705486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.705504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.710207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.710369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.710390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.716472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.716659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.716677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.722660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.722761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.722779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.728528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.728680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.728698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.734324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.734395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.734413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.739402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.739487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.739510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.744130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.744200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.744219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.748893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.748968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.748986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.753320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.753370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.753388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.757719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.757769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.757788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.761930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.761993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.762011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.766411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.766462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.766480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.770881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.770948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.770965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.775334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.775402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.775420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.779920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.779997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.780016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.784518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.784586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.002  [2024-12-10 00:12:57.784605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.002  [2024-12-10 00:12:57.789018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.002  [2024-12-10 00:12:57.789088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.789105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.793268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.793344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.793362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.797535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.797604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.797623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.801831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.801903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.801922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.806197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.806249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.806266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.810434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.810497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.810514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.814699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.814767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.814784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.818975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.819031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.819049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.823158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.823235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.823253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.827413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.827494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.827512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.831582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.831644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.831662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.835801] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.835856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.835874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.839947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.840016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.840034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.844082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.844152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.844177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.848322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.848379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.848398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.852478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.852541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.852563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.003  [2024-12-10 00:12:57.856702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.003  [2024-12-10 00:12:57.856798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.003  [2024-12-10 00:12:57.856816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.860816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.860892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.860911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.865133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.865219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.865238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.869659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.869746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.869763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.873973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.874035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.874053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.878047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.878097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.878115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.882105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.882164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.882188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.886243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.886310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.886328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.890472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.890534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.890552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.894643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.894703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.894720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.898809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.898866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.898884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.903004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.903067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.903084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.907202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.907271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.264  [2024-12-10 00:12:57.907289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.264  [2024-12-10 00:12:57.911383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.264  [2024-12-10 00:12:57.911452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.911470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.915596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.915665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.915684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.919699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.919768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.919787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.924059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.924152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.924176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.929580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.929754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.929771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.935049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.935131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.935150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.940075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.940173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.940191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.944779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.944828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.944846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.949284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.949352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.949371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.954193] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.954272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.954290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.959670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.959739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.959758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.965839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.966006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.966024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.972736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.972884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.972905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.980450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.980616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.980634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.987157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.987278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.987297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.992885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.992981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.992999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:57.998864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:57.998916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:57.998933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.003632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.003750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.003768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.009433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.009488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.009507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.014121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.014177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.014196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.019042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.019097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.019115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.023991] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.024060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.024079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.265  [2024-12-10 00:12:58.029141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.265  [2024-12-10 00:12:58.029219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.265  [2024-12-10 00:12:58.029237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.034161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.034238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.034256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.039383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.039470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.039498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.044639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.044693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.044711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.049535] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.049591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.054746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.054798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.054816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.059904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.059956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.059974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:42.266  [2024-12-10 00:12:58.065210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.266  [2024-12-10 00:12:58.065280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.266  [2024-12-10 00:12:58.065303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:42.266  [... the data_crc32_calc_done / WRITE / TRANSIENT TRANSPORT ERROR triplet above repeats every ~5 ms for the rest of the run; only the timestamp, lba, and sqhd fields change ...]
00:31:42.527  [2024-12-10 00:12:58.189124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1701750) with pdu=0x200016efef90
00:31:42.527  [2024-12-10 00:12:58.190225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.527  [2024-12-10 00:12:58.190245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0
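The repeated `data_crc32_calc_done` errors above come from NVMe/TCP data-digest verification, which runs CRC-32C (the Castagnoli polynomial) over each data PDU payload and compares it against the DDGST field; on mismatch the command completes with TRANSIENT TRANSPORT ERROR (00/22), exactly as logged. A minimal, bit-by-bit sketch of the reflected CRC-32C computation (real stacks use table-driven or hardware implementations; this is for illustration only):

```python
# CRC-32C (Castagnoli): polynomial 0x1EDC6F41, processed LSB-first
# as its reflection 0x82F63B78. NVMe/TCP data digests use this CRC.
def crc32c(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; fold in the reflected polynomial on carry-out
            crc = ((crc >> 1) ^ 0x82F63B78) if (crc & 1) else (crc >> 1)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789"
assert crc32c(b"123456789") == 0xE3069283
```

The digest-error test in this log deliberately corrupts the payload so the receiver's computed CRC-32C never matches the carried digest, which is why every WRITE in the block above fails the same way.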
00:31:42.527       6415.00 IOPS,   801.88 MiB/s
00:31:42.527                                                                                                 Latency(us)
00:31:42.527  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:42.527  Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:31:42.528  	 nvme0n1             :       2.00    6412.61     801.58       0.00     0.00    2490.93    1700.82   12420.63
00:31:42.528  ===================================================================================================================
00:31:42.528  Total                       :               6412.61     801.58       0.00     0.00    2490.93    1700.82   12420.63
00:31:42.528  {
00:31:42.528    "results": [
00:31:42.528      {
00:31:42.528        "job": "nvme0n1",
00:31:42.528        "core_mask": "0x2",
00:31:42.528        "workload": "randwrite",
00:31:42.528        "status": "finished",
00:31:42.528        "queue_depth": 16,
00:31:42.528        "io_size": 131072,
00:31:42.528        "runtime": 2.003241,
00:31:42.528        "iops": 6412.60836813943,
00:31:42.528        "mibps": 801.5760460174288,
00:31:42.528        "io_failed": 0,
00:31:42.528        "io_timeout": 0,
00:31:42.528        "avg_latency_us": 2490.926328744171,
00:31:42.528        "min_latency_us": 1700.815238095238,
00:31:42.528        "max_latency_us": 12420.63238095238
00:31:42.528      }
00:31:42.528    ],
00:31:42.528    "core_count": 1
00:31:42.528  }
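The JSON blob above is bdevperf's machine-readable summary; the human-readable table is derived from it. For example, the MiB/s column is just IOPS times the I/O size, and with 131072-byte (128 KiB) I/Os that reduces to IOPS / 8. A sketch using the fields from the result above (trimmed to what the calculation needs):

```python
import json

# Subset of the bdevperf "results" entry printed in the log above
result = json.loads("""{"results": [{"job": "nvme0n1", "queue_depth": 16,
    "io_size": 131072, "runtime": 2.003241, "iops": 6412.60836813943,
    "mibps": 801.5760460174288, "io_failed": 0}], "core_count": 1}""")

job = result["results"][0]
# MiB/s = IOPS * io_size / 1 MiB; with 131072-byte I/Os this is IOPS / 8
mibps = job["iops"] * job["io_size"] / (1024 * 1024)
assert abs(mibps - job["mibps"]) < 1e-9
```

Note that `io_failed` stays 0 even though digest errors were injected: the transport-level failures are counted separately, which is what the `get_transient_errcount` check below relies on.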
00:31:42.528    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:31:42.528    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:31:42.528    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:31:42.528  			| .driver_specific
00:31:42.528  			| .nvme_error
00:31:42.528  			| .status_code
00:31:42.528  			| .command_transient_transport_error'
00:31:42.528    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
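The jq filter above drills into the `bdev_get_iostat` RPC reply to count completions that failed with a transient transport error; the test then asserts the count is non-zero. A Python equivalent of that filter path, run against a hypothetical reply trimmed to the fields the filter actually walks (the count of 414 is taken from this log):

```python
import json

# Hypothetical bdev_get_iostat reply, reduced to the jq filter's path
iostat = json.loads("""{"bdevs": [{"name": "nvme0n1",
    "driver_specific": {"nvme_error": {"status_code":
        {"command_transient_transport_error": 414}}}}]}""")

# Equivalent of: .bdevs[0] | .driver_specific | .nvme_error
#                | .status_code | .command_transient_transport_error
count = (iostat["bdevs"][0]["driver_specific"]
         ["nvme_error"]["status_code"]
         ["command_transient_transport_error"])
assert count > 0  # passes only if digest errors were actually injected
```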
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 414 > 0 ))
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3239818
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3239818 ']'
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3239818
00:31:42.787    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:42.787    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3239818
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3239818'
00:31:42.787  killing process with pid 3239818
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3239818
00:31:42.787  Received shutdown signal, test time was about 2.000000 seconds
00:31:42.787                                                                                                 Latency(us)
00:31:42.787  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:42.787  ===================================================================================================================
00:31:42.787  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3239818
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3238147
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3238147 ']'
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3238147
00:31:42.787    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:31:42.787   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:42.787    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238147
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238147'
00:31:43.047  killing process with pid 3238147
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3238147
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3238147
00:31:43.047  
00:31:43.047  real	0m13.958s
00:31:43.047  user	0m26.701s
00:31:43.047  sys	0m4.533s
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:31:43.047  ************************************
00:31:43.047  END TEST nvmf_digest_error
00:31:43.047  ************************************
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:43.047   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:43.047  rmmod nvme_tcp
00:31:43.306  rmmod nvme_fabrics
00:31:43.306  rmmod nvme_keyring
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3238147 ']'
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3238147
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3238147 ']'
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3238147
00:31:43.306  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3238147) - No such process
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3238147 is not found'
00:31:43.306  Process with pid 3238147 is not found
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:43.306   00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:43.306    00:12:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:45.212   00:13:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:45.212  
00:31:45.212  real	0m36.219s
00:31:45.212  user	0m55.057s
00:31:45.212  sys	0m13.678s
00:31:45.212   00:13:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:45.212   00:13:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:31:45.212  ************************************
00:31:45.212  END TEST nvmf_digest
00:31:45.212  ************************************
00:31:45.212   00:13:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:31:45.212   00:13:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:45.471  ************************************
00:31:45.471  START TEST nvmf_bdevperf
00:31:45.471  ************************************
00:31:45.471   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:31:45.471  * Looking for test storage...
00:31:45.471  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-:
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-:
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<'
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:45.471     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0
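The trace above is scripts/common.sh's `lt 1.15 2` check deciding whether the installed lcov supports the newer `--rc` options: both versions are split on `.`, `-`, and `:` (the script's `IFS=.-:`), and components are compared numerically left to right, with missing components treated as 0. A rough Python rendition of that comparison (names are illustrative, not from the script):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """True if dotted version a < b, comparing components numerically."""
    # Split on . - : like the shell's IFS=.-: and keep numeric fields
    pa = [int(x) for x in re.split(r"[.\-:]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.\-:]", b) if x.isdigit()]
    # Pad the shorter version with zeros so "2" compares as "2.0"
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb  # lexicographic compare of equal-length int lists

assert version_lt("1.15", "2") is True
assert version_lt("2.0", "1.15") is False
```

Here `lt 1.15 2` is true, so the script falls back to the older LCOV_OPTS seen in the exports below.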
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:45.471    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:31:45.471  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:45.472  		--rc genhtml_branch_coverage=1
00:31:45.472  		--rc genhtml_function_coverage=1
00:31:45.472  		--rc genhtml_legend=1
00:31:45.472  		--rc geninfo_all_blocks=1
00:31:45.472  		--rc geninfo_unexecuted_blocks=1
00:31:45.472  		
00:31:45.472  		'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:31:45.472  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:45.472  		--rc genhtml_branch_coverage=1
00:31:45.472  		--rc genhtml_function_coverage=1
00:31:45.472  		--rc genhtml_legend=1
00:31:45.472  		--rc geninfo_all_blocks=1
00:31:45.472  		--rc geninfo_unexecuted_blocks=1
00:31:45.472  		
00:31:45.472  		'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:31:45.472  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:45.472  		--rc genhtml_branch_coverage=1
00:31:45.472  		--rc genhtml_function_coverage=1
00:31:45.472  		--rc genhtml_legend=1
00:31:45.472  		--rc geninfo_all_blocks=1
00:31:45.472  		--rc geninfo_unexecuted_blocks=1
00:31:45.472  		
00:31:45.472  		'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:31:45.472  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:45.472  		--rc genhtml_branch_coverage=1
00:31:45.472  		--rc genhtml_function_coverage=1
00:31:45.472  		--rc genhtml_legend=1
00:31:45.472  		--rc geninfo_all_blocks=1
00:31:45.472  		--rc geninfo_unexecuted_blocks=1
00:31:45.472  		
00:31:45.472  		'
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:31:45.472     00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:31:45.472      00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:45.472      00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:45.472      00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:45.472      00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:31:45.472      00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:31:45.472  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:45.472    00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable
00:31:45.472   00:13:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=()
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:31:52.040  Found 0000:af:00.0 (0x8086 - 0x159b)
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:31:52.040  Found 0000:af:00.1 (0x8086 - 0x159b)
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:52.040   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:31:52.041  Found net devices under 0000:af:00.0: cvl_0_0
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]]
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:31:52.041  Found net devices under 0000:af:00.1: cvl_0_1
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:31:52.041   00:13:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:31:52.041  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:52.041  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms
00:31:52.041  
00:31:52.041  --- 10.0.0.2 ping statistics ---
00:31:52.041  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:52.041  rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:31:52.041  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:52.041  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms
00:31:52.041  
00:31:52.041  --- 10.0.0.1 ping statistics ---
00:31:52.041  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:52.041  rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3243768
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3243768
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3243768 ']'
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:52.041  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.041  [2024-12-10 00:13:07.281720] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:52.041  [2024-12-10 00:13:07.281765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:52.041  [2024-12-10 00:13:07.360226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:52.041  [2024-12-10 00:13:07.400770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:52.041  [2024-12-10 00:13:07.400805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:52.041  [2024-12-10 00:13:07.400813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:52.041  [2024-12-10 00:13:07.400819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:52.041  [2024-12-10 00:13:07.400824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:52.041  [2024-12-10 00:13:07.402104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:52.041  [2024-12-10 00:13:07.402210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:52.041  [2024-12-10 00:13:07.402210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:52.041   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.042  [2024-12-10 00:13:07.533965] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.042  Malloc0
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:52.042  [2024-12-10 00:13:07.593611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:52.042   00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:52.042  {
00:31:52.042    "params": {
00:31:52.042      "name": "Nvme$subsystem",
00:31:52.042      "trtype": "$TEST_TRANSPORT",
00:31:52.042      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:52.042      "adrfam": "ipv4",
00:31:52.042      "trsvcid": "$NVMF_PORT",
00:31:52.042      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:52.042      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:52.042      "hdgst": ${hdgst:-false},
00:31:52.042      "ddgst": ${ddgst:-false}
00:31:52.042    },
00:31:52.042    "method": "bdev_nvme_attach_controller"
00:31:52.042  }
00:31:52.042  EOF
00:31:52.042  )")
00:31:52.042     00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:31:52.042    00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:31:52.042     00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:31:52.042     00:13:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:52.042    "params": {
00:31:52.042      "name": "Nvme1",
00:31:52.042      "trtype": "tcp",
00:31:52.042      "traddr": "10.0.0.2",
00:31:52.042      "adrfam": "ipv4",
00:31:52.042      "trsvcid": "4420",
00:31:52.042      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:52.042      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:52.042      "hdgst": false,
00:31:52.042      "ddgst": false
00:31:52.042    },
00:31:52.042    "method": "bdev_nvme_attach_controller"
00:31:52.042  }'
00:31:52.042  [2024-12-10 00:13:07.644581] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:52.042  [2024-12-10 00:13:07.644631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243981 ]
00:31:52.042  [2024-12-10 00:13:07.717181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:52.042  [2024-12-10 00:13:07.756866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:52.300  Running I/O for 1 seconds...
00:31:53.235      11250.00 IOPS,    43.95 MiB/s
00:31:53.235                                                                                                  Latency(us)
00:31:53.235  
[2024-12-09T23:13:09.092Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:31:53.235  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:53.235  	 Verification LBA range: start 0x0 length 0x4000
00:31:53.235  	 Nvme1n1             :       1.01   11320.24      44.22       0.00     0.00   11260.93    1061.06   15978.30
00:31:53.235  
[2024-12-09T23:13:09.093Z]  ===================================================================================================================
00:31:53.236  
[2024-12-09T23:13:09.093Z]  Total                       :              11320.24      44.22       0.00     0.00   11260.93    1061.06   15978.30
00:31:53.236   00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3244229
00:31:53.236   00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:31:53.236   00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=()
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:31:53.236  {
00:31:53.236    "params": {
00:31:53.236      "name": "Nvme$subsystem",
00:31:53.236      "trtype": "$TEST_TRANSPORT",
00:31:53.236      "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:53.236      "adrfam": "ipv4",
00:31:53.236      "trsvcid": "$NVMF_PORT",
00:31:53.236      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:53.236      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:53.236      "hdgst": ${hdgst:-false},
00:31:53.236      "ddgst": ${ddgst:-false}
00:31:53.236    },
00:31:53.236    "method": "bdev_nvme_attach_controller"
00:31:53.236  }
00:31:53.236  EOF
00:31:53.236  )")
00:31:53.236     00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat
00:31:53.236    00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq .
00:31:53.236     00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=,
00:31:53.236     00:13:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:31:53.236    "params": {
00:31:53.236      "name": "Nvme1",
00:31:53.236      "trtype": "tcp",
00:31:53.236      "traddr": "10.0.0.2",
00:31:53.236      "adrfam": "ipv4",
00:31:53.236      "trsvcid": "4420",
00:31:53.236      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:31:53.236      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:31:53.236      "hdgst": false,
00:31:53.236      "ddgst": false
00:31:53.236    },
00:31:53.236    "method": "bdev_nvme_attach_controller"
00:31:53.236  }'
00:31:53.494  [2024-12-10 00:13:09.121863] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:53.494  [2024-12-10 00:13:09.121913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244229 ]
00:31:53.494  [2024-12-10 00:13:09.196965] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:53.494  [2024-12-10 00:13:09.234004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:53.753  Running I/O for 15 seconds...
00:31:55.625      11158.00 IOPS,    43.59 MiB/s
[2024-12-09T23:13:12.422Z]     11305.00 IOPS,    44.16 MiB/s
[2024-12-09T23:13:12.422Z]  00:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3243768
00:31:56.565   00:13:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:31:56.565  [2024-12-10 00:13:12.091900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.091936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.091954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.091962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.091972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.091979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.091988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:110752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:110760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:110768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.565  [2024-12-10 00:13:12.092076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.565  [2024-12-10 00:13:12.092267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.565  [2024-12-10 00:13:12.092277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:110776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.566  [2024-12-10 00:13:12.092343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.566  [2024-12-10 00:13:12.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.566  [2024-12-10 00:13:12.092742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.092991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.092998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:110368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.567  [2024-12-10 00:13:12.093304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.567  [2024-12-10 00:13:12.093310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:110440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:110448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:110536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:110544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:110792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:110808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:56.568  [2024-12-10 00:13:12.093704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.568  [2024-12-10 00:13:12.093742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.568  [2024-12-10 00:13:12.093748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:110616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:56.569  [2024-12-10 00:13:12.093919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.093927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf72510 is same with the state(6) to be set
00:31:56.569  [2024-12-10 00:13:12.093935] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:56.569  [2024-12-10 00:13:12.093940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:56.569  [2024-12-10 00:13:12.093946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110696 len:8 PRP1 0x0 PRP2 0x0
00:31:56.569  [2024-12-10 00:13:12.093954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:56.569  [2024-12-10 00:13:12.096792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.569  [2024-12-10 00:13:12.096845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.569  [2024-12-10 00:13:12.097360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.569  [2024-12-10 00:13:12.097376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.569  [2024-12-10 00:13:12.097384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.569  [2024-12-10 00:13:12.097558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.569  [2024-12-10 00:13:12.097732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.569  [2024-12-10 00:13:12.097740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.569  [2024-12-10 00:13:12.097748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.569  [2024-12-10 00:13:12.097756] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.569  [2024-12-10 00:13:12.109996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.569  [2024-12-10 00:13:12.110448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.569  [2024-12-10 00:13:12.110496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.569  [2024-12-10 00:13:12.110520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.569  [2024-12-10 00:13:12.111104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.569  [2024-12-10 00:13:12.111312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.569  [2024-12-10 00:13:12.111321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.569  [2024-12-10 00:13:12.111327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.569  [2024-12-10 00:13:12.111334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.569  [2024-12-10 00:13:12.123082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.569  [2024-12-10 00:13:12.123468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.569  [2024-12-10 00:13:12.123515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.569  [2024-12-10 00:13:12.123538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.569  [2024-12-10 00:13:12.124122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.569  [2024-12-10 00:13:12.124370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.569  [2024-12-10 00:13:12.124379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.569  [2024-12-10 00:13:12.124385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.569  [2024-12-10 00:13:12.124392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.569  [2024-12-10 00:13:12.135997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.569  [2024-12-10 00:13:12.136422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.569  [2024-12-10 00:13:12.136469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.136493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.137077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.137306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.137315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.137321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.137327] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.150932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.151458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.151513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.151537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.152119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.152407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.152420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.152429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.152438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.163941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.164324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.164341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.164348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.164516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.164684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.164692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.164699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.164705] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.176781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.177147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.177207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.177230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.177682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.177842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.177849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.177855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.177861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.189541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.189930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.189986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.190009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.190521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.190690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.190699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.190705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.190711] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.202490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.202909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.202925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.202932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.203100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.203276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.203284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.203290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.203296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.570  [2024-12-10 00:13:12.215332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.570  [2024-12-10 00:13:12.215696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.570  [2024-12-10 00:13:12.215712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.570  [2024-12-10 00:13:12.215719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.570  [2024-12-10 00:13:12.215877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.570  [2024-12-10 00:13:12.216036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.570  [2024-12-10 00:13:12.216047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.570  [2024-12-10 00:13:12.216053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.570  [2024-12-10 00:13:12.216059] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.228237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.228644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.571  [2024-12-10 00:13:12.228689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.571  [2024-12-10 00:13:12.228712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.571  [2024-12-10 00:13:12.229306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.571  [2024-12-10 00:13:12.229795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.571  [2024-12-10 00:13:12.229803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.571  [2024-12-10 00:13:12.229809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.571  [2024-12-10 00:13:12.229815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.241240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.241658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.571  [2024-12-10 00:13:12.241704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.571  [2024-12-10 00:13:12.241727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.571  [2024-12-10 00:13:12.242325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.571  [2024-12-10 00:13:12.242539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.571  [2024-12-10 00:13:12.242547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.571  [2024-12-10 00:13:12.242554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.571  [2024-12-10 00:13:12.242560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.254103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.254448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.571  [2024-12-10 00:13:12.254465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.571  [2024-12-10 00:13:12.254472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.571  [2024-12-10 00:13:12.254640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.571  [2024-12-10 00:13:12.254808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.571  [2024-12-10 00:13:12.254816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.571  [2024-12-10 00:13:12.254823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.571  [2024-12-10 00:13:12.254832] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.266829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.267241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.571  [2024-12-10 00:13:12.267258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.571  [2024-12-10 00:13:12.267265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.571  [2024-12-10 00:13:12.267432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.571  [2024-12-10 00:13:12.267600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.571  [2024-12-10 00:13:12.267608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.571  [2024-12-10 00:13:12.267614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.571  [2024-12-10 00:13:12.267621] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.279577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.279950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.571  [2024-12-10 00:13:12.279965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.571  [2024-12-10 00:13:12.279972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.571  [2024-12-10 00:13:12.280130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.571  [2024-12-10 00:13:12.280317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.571  [2024-12-10 00:13:12.280325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.571  [2024-12-10 00:13:12.280332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.571  [2024-12-10 00:13:12.280338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.571  [2024-12-10 00:13:12.292309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.571  [2024-12-10 00:13:12.292725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.292742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.292749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.292917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.293085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.293093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.293100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.293106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.305258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.305660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.305705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.305728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.306327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.306883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.306891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.306897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.306903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.318011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.318440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.318457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.318464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.318632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.318800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.318808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.318814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.318820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.330799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.331184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.331201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.331208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.331375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.331543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.331552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.331558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.331564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.343641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.344076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.344092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.344100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.344289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.344459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.344468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.344474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.344480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.356652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.357014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.357060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.357083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.357679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.358153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.358161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.358170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.358177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.572  [2024-12-10 00:13:12.369620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.572  [2024-12-10 00:13:12.370016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.572  [2024-12-10 00:13:12.370060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.572  [2024-12-10 00:13:12.370083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.572  [2024-12-10 00:13:12.370656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.572  [2024-12-10 00:13:12.371048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.572  [2024-12-10 00:13:12.371065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.572  [2024-12-10 00:13:12.371079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.572  [2024-12-10 00:13:12.371092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.573  [2024-12-10 00:13:12.384472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.573  [2024-12-10 00:13:12.384986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.573  [2024-12-10 00:13:12.385008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.573  [2024-12-10 00:13:12.385018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.573  [2024-12-10 00:13:12.385280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.573  [2024-12-10 00:13:12.385535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.573  [2024-12-10 00:13:12.385550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.573  [2024-12-10 00:13:12.385560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.573  [2024-12-10 00:13:12.385569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.573  [2024-12-10 00:13:12.397589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.573  [2024-12-10 00:13:12.397998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.573  [2024-12-10 00:13:12.398014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.573  [2024-12-10 00:13:12.398022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.573  [2024-12-10 00:13:12.398200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.573  [2024-12-10 00:13:12.398373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.573  [2024-12-10 00:13:12.398381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.573  [2024-12-10 00:13:12.398388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.573  [2024-12-10 00:13:12.398394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.573  [2024-12-10 00:13:12.410369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.573  [2024-12-10 00:13:12.410799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.573  [2024-12-10 00:13:12.410844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.573  [2024-12-10 00:13:12.410867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.573  [2024-12-10 00:13:12.411307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.573  [2024-12-10 00:13:12.411476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.573  [2024-12-10 00:13:12.411484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.573  [2024-12-10 00:13:12.411491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.573  [2024-12-10 00:13:12.411497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.833  [2024-12-10 00:13:12.423266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.833  [2024-12-10 00:13:12.423673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.833  [2024-12-10 00:13:12.423689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.833  [2024-12-10 00:13:12.423696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.833  [2024-12-10 00:13:12.423864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.833  [2024-12-10 00:13:12.424032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.424040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.424046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.424056] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.436158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.436582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.436599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.436606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.436773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.436940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.436948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.436954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.436960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834      10037.00 IOPS,    39.21 MiB/s
00:31:56.834  [2024-12-10 00:13:12.448989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.449411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.449428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.449435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.449603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.449771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.449780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.449786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.449792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.461804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.462195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.462211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.462218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.462376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.462535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.462543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.462548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.462554] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.474635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.475029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.475044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.475051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.475233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.475401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.475409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.475415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.475421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.487456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.487868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.487884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.487891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.488059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.488234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.488243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.488249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.488255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.500204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.500600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.500616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.500623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.500790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.500959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.500967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.500973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.500979] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.512928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.513367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.513384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.513391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.513562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.513734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.834  [2024-12-10 00:13:12.513743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.834  [2024-12-10 00:13:12.513749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.834  [2024-12-10 00:13:12.513755] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.834  [2024-12-10 00:13:12.526047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.834  [2024-12-10 00:13:12.526464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.834  [2024-12-10 00:13:12.526511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.834  [2024-12-10 00:13:12.526536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.834  [2024-12-10 00:13:12.527049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.834  [2024-12-10 00:13:12.527227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.527236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.527243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.527249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.538897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.539322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.539339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.539346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.539514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.539683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.539690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.539696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.539702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.551840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.552265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.552310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.552333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.552914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.553267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.553281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.553288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.553294] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.564680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.565057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.565074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.565081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.565255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.565423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.565432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.565438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.565444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.577525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.577913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.577929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.577935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.578094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.578276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.578285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.578291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.578298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.590282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.590716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.590761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.590785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.591285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.591454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.591462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.591468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.591477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.603194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.603627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.603644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.603651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.603819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.603986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.603995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.604001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.604007] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.616248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.616673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.616689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.616696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.616863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.617032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.617040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.617046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.835  [2024-12-10 00:13:12.617053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.835  [2024-12-10 00:13:12.629097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.835  [2024-12-10 00:13:12.629491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.835  [2024-12-10 00:13:12.629508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.835  [2024-12-10 00:13:12.629516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.835  [2024-12-10 00:13:12.629684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.835  [2024-12-10 00:13:12.629853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.835  [2024-12-10 00:13:12.629861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.835  [2024-12-10 00:13:12.629867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.836  [2024-12-10 00:13:12.629873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.836  [2024-12-10 00:13:12.641877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.836  [2024-12-10 00:13:12.642297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.836  [2024-12-10 00:13:12.642316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.836  [2024-12-10 00:13:12.642324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.836  [2024-12-10 00:13:12.642492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.836  [2024-12-10 00:13:12.642660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.836  [2024-12-10 00:13:12.642667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.836  [2024-12-10 00:13:12.642674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.836  [2024-12-10 00:13:12.642680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.836  [2024-12-10 00:13:12.654674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.836  [2024-12-10 00:13:12.655026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.836  [2024-12-10 00:13:12.655061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.836  [2024-12-10 00:13:12.655087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.836  [2024-12-10 00:13:12.655652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.836  [2024-12-10 00:13:12.655820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.836  [2024-12-10 00:13:12.655828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.836  [2024-12-10 00:13:12.655834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.836  [2024-12-10 00:13:12.655840] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.836  [2024-12-10 00:13:12.667523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.836  [2024-12-10 00:13:12.667924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.836  [2024-12-10 00:13:12.667968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.836  [2024-12-10 00:13:12.667992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.836  [2024-12-10 00:13:12.668445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.836  [2024-12-10 00:13:12.668635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.836  [2024-12-10 00:13:12.668643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.836  [2024-12-10 00:13:12.668649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.836  [2024-12-10 00:13:12.668655] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:56.836  [2024-12-10 00:13:12.680400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:56.836  [2024-12-10 00:13:12.680810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:56.836  [2024-12-10 00:13:12.680855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:56.836  [2024-12-10 00:13:12.680878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:56.836  [2024-12-10 00:13:12.681483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:56.836  [2024-12-10 00:13:12.681674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:56.836  [2024-12-10 00:13:12.681682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:56.836  [2024-12-10 00:13:12.681688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:56.836  [2024-12-10 00:13:12.681694] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.693323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.693745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.693789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.693813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.694410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.694858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.694866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.694872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.694879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.706093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.706523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.706540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.706547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.706716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.706884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.706892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.706899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.706905] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.718924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.719378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.719386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.719556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.719717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.719727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.719733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.719739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.731836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.732288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.732305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.732312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.732481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.732649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.732658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.732664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.732670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.744762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.745205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.745222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.745230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.745398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.745567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.745575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.745581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.745588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.757511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.757928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.757945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.757952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.758120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.758297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.758306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.758313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.758319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.770454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.770818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.770834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.770841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.771009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.771183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.771192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.771198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.771204] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.783282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.783700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.096  [2024-12-10 00:13:12.783716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.096  [2024-12-10 00:13:12.783724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.096  [2024-12-10 00:13:12.783891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.096  [2024-12-10 00:13:12.784060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.096  [2024-12-10 00:13:12.784068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.096  [2024-12-10 00:13:12.784074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.096  [2024-12-10 00:13:12.784080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.096  [2024-12-10 00:13:12.796144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.096  [2024-12-10 00:13:12.796608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.796625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.796632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.796800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.796968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.796976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.796984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.796990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.809105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.809526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.809546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.809553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.809720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.809888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.809896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.809902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.809908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.821935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.822349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.822366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.822374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.822541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.822710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.822718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.822725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.822731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.834808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.835195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.835212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.835219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.835378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.835537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.835544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.835551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.835557] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.847649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.848059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.848075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.848082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.848260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.848428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.848436] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.848442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.848448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.860455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.860871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.860916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.860939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.861534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.862121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.862146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.862183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.862190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.873539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.873982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.873999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.874007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.874185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.874359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.874367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.874374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.874380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.886445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.886916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.886960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.886983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.097  [2024-12-10 00:13:12.887576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.097  [2024-12-10 00:13:12.888169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.097  [2024-12-10 00:13:12.888179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.097  [2024-12-10 00:13:12.888188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.097  [2024-12-10 00:13:12.888195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.097  [2024-12-10 00:13:12.899440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.097  [2024-12-10 00:13:12.899864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.097  [2024-12-10 00:13:12.899909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.097  [2024-12-10 00:13:12.899933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.098  [2024-12-10 00:13:12.900141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.098  [2024-12-10 00:13:12.900319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.098  [2024-12-10 00:13:12.900328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.098  [2024-12-10 00:13:12.900334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.098  [2024-12-10 00:13:12.900340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.098  [2024-12-10 00:13:12.912319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.098  [2024-12-10 00:13:12.912678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.098  [2024-12-10 00:13:12.912721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.098  [2024-12-10 00:13:12.912744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.098  [2024-12-10 00:13:12.913275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.098  [2024-12-10 00:13:12.913444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.098  [2024-12-10 00:13:12.913453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.098  [2024-12-10 00:13:12.913459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.098  [2024-12-10 00:13:12.913465] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.098  [2024-12-10 00:13:12.925214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.098  [2024-12-10 00:13:12.925515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.098  [2024-12-10 00:13:12.925560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.098  [2024-12-10 00:13:12.925583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.098  [2024-12-10 00:13:12.926158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.098  [2024-12-10 00:13:12.926334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.098  [2024-12-10 00:13:12.926343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.098  [2024-12-10 00:13:12.926349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.098  [2024-12-10 00:13:12.926355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.098  [2024-12-10 00:13:12.938120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.098  [2024-12-10 00:13:12.938489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.098  [2024-12-10 00:13:12.938534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.098  [2024-12-10 00:13:12.938558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.098  [2024-12-10 00:13:12.939113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.098  [2024-12-10 00:13:12.939288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.098  [2024-12-10 00:13:12.939298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.098  [2024-12-10 00:13:12.939304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.098  [2024-12-10 00:13:12.939310] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.098  [2024-12-10 00:13:12.951094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.098  [2024-12-10 00:13:12.951515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.098  [2024-12-10 00:13:12.951533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.098  [2024-12-10 00:13:12.951540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.098  [2024-12-10 00:13:12.951713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.098  [2024-12-10 00:13:12.951887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.098  [2024-12-10 00:13:12.951895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.098  [2024-12-10 00:13:12.951902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.098  [2024-12-10 00:13:12.951908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.358  [2024-12-10 00:13:12.964046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.358  [2024-12-10 00:13:12.964353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.358  [2024-12-10 00:13:12.964400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.358  [2024-12-10 00:13:12.964424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.358  [2024-12-10 00:13:12.965005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.358  [2024-12-10 00:13:12.965492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.358  [2024-12-10 00:13:12.965501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.358  [2024-12-10 00:13:12.965507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.358  [2024-12-10 00:13:12.965513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.358  [2024-12-10 00:13:12.976942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.358  [2024-12-10 00:13:12.977383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.358  [2024-12-10 00:13:12.977430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:12.977461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:12.978045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:12.978371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:12.978379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:12.978385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:12.978391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:12.989841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:12.990278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:12.990295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:12.990303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:12.990473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:12.990633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:12.990641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:12.990647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:12.990653] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.002824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.003248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.003265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.003272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.003449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.003607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.003615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.003621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.003627] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.015654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.016077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.016094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.016101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.016272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.016445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.016453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.016460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.016466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.028532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.028965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.029004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.029029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.029573] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.029734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.029742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.029748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.029753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.041349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.041723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.041739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.041747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.041915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.042083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.042092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.042098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.042104] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.054104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.054428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.054445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.054452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.054624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.054798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.054806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.054815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.054822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.067007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.067348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.067364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.067371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.359  [2024-12-10 00:13:13.067538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.359  [2024-12-10 00:13:13.067707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.359  [2024-12-10 00:13:13.067717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.359  [2024-12-10 00:13:13.067725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.359  [2024-12-10 00:13:13.067731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.359  [2024-12-10 00:13:13.079962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.359  [2024-12-10 00:13:13.080380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.359  [2024-12-10 00:13:13.080425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.359  [2024-12-10 00:13:13.080448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.081029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.081605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.081613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.081620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.081626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.092974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.093400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.093416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.093423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.093596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.093769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.093777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.093784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.093790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.106010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.106455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.106473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.106480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.106664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.106849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.106858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.106864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.106871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.119226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.119523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.119541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.119549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.119732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.119917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.119926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.119933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.119940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.132417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.132763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.132779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.132787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.132959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.133133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.133141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.133147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.133153] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.145421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.145852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.145869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.145880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.146052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.146232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.146241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.146248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.146254] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.158695] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.159137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.159154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.159162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.159352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.159536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.159544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.159551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.159558] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.171844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.172292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.172310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.172318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.172502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.360  [2024-12-10 00:13:13.172686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.360  [2024-12-10 00:13:13.172695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.360  [2024-12-10 00:13:13.172702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.360  [2024-12-10 00:13:13.172708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.360  [2024-12-10 00:13:13.185159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.360  [2024-12-10 00:13:13.185607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.360  [2024-12-10 00:13:13.185624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.360  [2024-12-10 00:13:13.185631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.360  [2024-12-10 00:13:13.185815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.361  [2024-12-10 00:13:13.186004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.361  [2024-12-10 00:13:13.186013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.361  [2024-12-10 00:13:13.186020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.361  [2024-12-10 00:13:13.186026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.361  [2024-12-10 00:13:13.198320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.361  [2024-12-10 00:13:13.198704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.361  [2024-12-10 00:13:13.198748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.361  [2024-12-10 00:13:13.198771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.361  [2024-12-10 00:13:13.199258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.361  [2024-12-10 00:13:13.199450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.361  [2024-12-10 00:13:13.199458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.361  [2024-12-10 00:13:13.199464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.361  [2024-12-10 00:13:13.199470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.361  [2024-12-10 00:13:13.211392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.361  [2024-12-10 00:13:13.211737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.361  [2024-12-10 00:13:13.211753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.361  [2024-12-10 00:13:13.211760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.361  [2024-12-10 00:13:13.211933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.361  [2024-12-10 00:13:13.212105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.361  [2024-12-10 00:13:13.212113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.361  [2024-12-10 00:13:13.212119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.361  [2024-12-10 00:13:13.212125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.621  [2024-12-10 00:13:13.224427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.621  [2024-12-10 00:13:13.224784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.621  [2024-12-10 00:13:13.224802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.621  [2024-12-10 00:13:13.224809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.621  [2024-12-10 00:13:13.224981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.621  [2024-12-10 00:13:13.225156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.621  [2024-12-10 00:13:13.225164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.621  [2024-12-10 00:13:13.225182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.621  [2024-12-10 00:13:13.225188] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.621  [2024-12-10 00:13:13.237278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.621  [2024-12-10 00:13:13.237534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.621  [2024-12-10 00:13:13.237550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.621  [2024-12-10 00:13:13.237557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.621  [2024-12-10 00:13:13.237725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.621  [2024-12-10 00:13:13.237893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.621  [2024-12-10 00:13:13.237901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.621  [2024-12-10 00:13:13.237907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.621  [2024-12-10 00:13:13.237913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.621  [2024-12-10 00:13:13.250185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.621  [2024-12-10 00:13:13.250533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.621  [2024-12-10 00:13:13.250549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.621  [2024-12-10 00:13:13.250556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.621  [2024-12-10 00:13:13.250724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.621  [2024-12-10 00:13:13.250892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.621  [2024-12-10 00:13:13.250901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.621  [2024-12-10 00:13:13.250907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.621  [2024-12-10 00:13:13.250913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.621  [2024-12-10 00:13:13.263098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.263459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.263476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.263483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.263651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.622  [2024-12-10 00:13:13.263819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.622  [2024-12-10 00:13:13.263827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.622  [2024-12-10 00:13:13.263833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.622  [2024-12-10 00:13:13.263839] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.622  [2024-12-10 00:13:13.275960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.276319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.276335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.276342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.276515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.622  [2024-12-10 00:13:13.276688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.622  [2024-12-10 00:13:13.276696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.622  [2024-12-10 00:13:13.276702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.622  [2024-12-10 00:13:13.276708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.622  [2024-12-10 00:13:13.288876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.289302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.289332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.289356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.289938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.622  [2024-12-10 00:13:13.290174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.622  [2024-12-10 00:13:13.290182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.622  [2024-12-10 00:13:13.290188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.622  [2024-12-10 00:13:13.290195] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.622  [2024-12-10 00:13:13.303883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.304390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.304412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.304422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.304675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.622  [2024-12-10 00:13:13.304930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.622  [2024-12-10 00:13:13.304942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.622  [2024-12-10 00:13:13.304951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.622  [2024-12-10 00:13:13.304960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.622  [2024-12-10 00:13:13.316996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.317432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.317449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.317459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.317631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.622  [2024-12-10 00:13:13.317805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.622  [2024-12-10 00:13:13.317813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.622  [2024-12-10 00:13:13.317819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.622  [2024-12-10 00:13:13.317826] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.622  [2024-12-10 00:13:13.329749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.622  [2024-12-10 00:13:13.330190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.622  [2024-12-10 00:13:13.330206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.622  [2024-12-10 00:13:13.330213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.622  [2024-12-10 00:13:13.330381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.330550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.330558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.330564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.330570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.342507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.342901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.342916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.342923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.343082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.343265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.343273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.343280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.343286] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.355241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.355660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.355675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.355682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.355840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.356002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.356010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.356016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.356021] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.368019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.368374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.368391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.368398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.368566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.368734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.368742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.368748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.368754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.380867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.381290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.381307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.381314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.381481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.381649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.381658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.381665] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.381671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.393979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.394322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.394339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.394347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.394519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.394703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.394711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.394720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.394727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.406837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.407268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.407314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.407337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.407920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.408501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.408509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.408515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.408521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.419616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.420045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.420088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.420111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.420557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.623  [2024-12-10 00:13:13.420726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.623  [2024-12-10 00:13:13.420733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.623  [2024-12-10 00:13:13.420740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.623  [2024-12-10 00:13:13.420746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.623  [2024-12-10 00:13:13.432467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.623  [2024-12-10 00:13:13.432915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.623  [2024-12-10 00:13:13.432966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.623  [2024-12-10 00:13:13.432989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.623  [2024-12-10 00:13:13.433546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.624  [2024-12-10 00:13:13.433716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.624  [2024-12-10 00:13:13.433723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.624  [2024-12-10 00:13:13.433729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.624  [2024-12-10 00:13:13.433735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.624       7527.75 IOPS,    29.41 MiB/s
00:31:57.624  [2024-12-10 00:13:13.446419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.624  [2024-12-10 00:13:13.446846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.624  [2024-12-10 00:13:13.446862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.624  [2024-12-10 00:13:13.446869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.624  [2024-12-10 00:13:13.447028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.624  [2024-12-10 00:13:13.447207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.624  [2024-12-10 00:13:13.447215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.624  [2024-12-10 00:13:13.447222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.624  [2024-12-10 00:13:13.447228] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.624  [2024-12-10 00:13:13.459188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.624  [2024-12-10 00:13:13.459524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.624  [2024-12-10 00:13:13.459540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.624  [2024-12-10 00:13:13.459547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.624  [2024-12-10 00:13:13.459705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.624  [2024-12-10 00:13:13.459864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.624  [2024-12-10 00:13:13.459871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.624  [2024-12-10 00:13:13.459877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.624  [2024-12-10 00:13:13.459883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.624  [2024-12-10 00:13:13.472027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.624  [2024-12-10 00:13:13.472475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.624  [2024-12-10 00:13:13.472491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.624  [2024-12-10 00:13:13.472498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.624  [2024-12-10 00:13:13.472666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.624  [2024-12-10 00:13:13.472858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.624  [2024-12-10 00:13:13.472866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.624  [2024-12-10 00:13:13.472872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.624  [2024-12-10 00:13:13.472878] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.884  [2024-12-10 00:13:13.485114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.884  [2024-12-10 00:13:13.485553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.884  [2024-12-10 00:13:13.485598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.884  [2024-12-10 00:13:13.485630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.884  [2024-12-10 00:13:13.486225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.884  [2024-12-10 00:13:13.486736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.884  [2024-12-10 00:13:13.486744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.884  [2024-12-10 00:13:13.486751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.884  [2024-12-10 00:13:13.486757] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.884  [2024-12-10 00:13:13.498146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.884  [2024-12-10 00:13:13.498603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.884  [2024-12-10 00:13:13.498649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.884  [2024-12-10 00:13:13.498672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.884  [2024-12-10 00:13:13.499268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.884  [2024-12-10 00:13:13.499773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.884  [2024-12-10 00:13:13.499781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.884  [2024-12-10 00:13:13.499787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.884  [2024-12-10 00:13:13.499793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.884  [2024-12-10 00:13:13.511040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.884  [2024-12-10 00:13:13.511497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.884  [2024-12-10 00:13:13.511542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.884  [2024-12-10 00:13:13.511565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.884  [2024-12-10 00:13:13.512146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.884  [2024-12-10 00:13:13.512714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.884  [2024-12-10 00:13:13.512723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.884  [2024-12-10 00:13:13.512729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.884  [2024-12-10 00:13:13.512735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.884  [2024-12-10 00:13:13.526532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.884  [2024-12-10 00:13:13.527065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.884  [2024-12-10 00:13:13.527088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.884  [2024-12-10 00:13:13.527099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.884  [2024-12-10 00:13:13.527361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.884  [2024-12-10 00:13:13.527622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.884  [2024-12-10 00:13:13.527634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.884  [2024-12-10 00:13:13.527643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.884  [2024-12-10 00:13:13.527652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.884  [2024-12-10 00:13:13.539460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.884  [2024-12-10 00:13:13.539801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.884  [2024-12-10 00:13:13.539818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.884  [2024-12-10 00:13:13.539825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.539994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.540162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.540176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.540183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.540189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.552323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.552718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.552734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.552740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.552898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.553057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.553065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.553071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.553077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.565122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.565541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.565592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.565615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.566213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.566800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.566825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.566846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.566872] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.578014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.578460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.578477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.578484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.578652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.578821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.578829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.578835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.578841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.590840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.591266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.591311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.591335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.591917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.592097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.592105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.592111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.592116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.603762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.604159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.604180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.604187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.604371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.604542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.604550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.604556] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.604562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.616566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.616966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.616981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.616988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.617148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.617334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.617343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.617349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.617355] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.629424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.629818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.629834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.629840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.629999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.630157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.885  [2024-12-10 00:13:13.630164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.885  [2024-12-10 00:13:13.630184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.885  [2024-12-10 00:13:13.630190] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.885  [2024-12-10 00:13:13.642194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.885  [2024-12-10 00:13:13.642577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.885  [2024-12-10 00:13:13.642623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.885  [2024-12-10 00:13:13.642646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.885  [2024-12-10 00:13:13.643116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.885  [2024-12-10 00:13:13.643302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.643311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.643317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.643323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.655338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.655744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.655760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.655767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.655939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.656107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.656115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.656121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.656127] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.668226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.668579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.668595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.668602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.668770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.668937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.668945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.668951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.668957] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.681113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.681510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.681527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.681534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.681692] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.681851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.681858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.681864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.681870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.693884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.694303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.694319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.694325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.694484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.694643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.694653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.694659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.694665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.706748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.707143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.707159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.707171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.707354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.707522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.707530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.707536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.707542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.719583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.720007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.720023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.720029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.720209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.720381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.720389] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.720396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.720402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:57.886  [2024-12-10 00:13:13.732411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:57.886  [2024-12-10 00:13:13.732744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:57.886  [2024-12-10 00:13:13.732760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:57.886  [2024-12-10 00:13:13.732768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:57.886  [2024-12-10 00:13:13.732936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:57.886  [2024-12-10 00:13:13.733107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:57.886  [2024-12-10 00:13:13.733114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:57.886  [2024-12-10 00:13:13.733120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:57.886  [2024-12-10 00:13:13.733129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.146  [2024-12-10 00:13:13.745245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.146  [2024-12-10 00:13:13.745692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.146  [2024-12-10 00:13:13.745741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.146  [2024-12-10 00:13:13.745764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.146  [2024-12-10 00:13:13.746374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.146  [2024-12-10 00:13:13.746548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.146  [2024-12-10 00:13:13.746556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.146  [2024-12-10 00:13:13.746563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.146  [2024-12-10 00:13:13.746569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.146  [2024-12-10 00:13:13.758098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.146  [2024-12-10 00:13:13.758532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.146  [2024-12-10 00:13:13.758549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.146  [2024-12-10 00:13:13.758556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.146  [2024-12-10 00:13:13.758723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.146  [2024-12-10 00:13:13.758893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.146  [2024-12-10 00:13:13.758900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.146  [2024-12-10 00:13:13.758907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.146  [2024-12-10 00:13:13.758913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.146  [2024-12-10 00:13:13.770980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.146  [2024-12-10 00:13:13.771410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.146  [2024-12-10 00:13:13.771426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.146  [2024-12-10 00:13:13.771433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.771601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.771770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.771778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.771784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.771791] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.783829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.784174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.784190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.784197] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.784356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.784514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.784522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.784528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.784533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.796693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.797054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.797071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.797078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.797251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.797420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.797428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.797435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.797441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.809557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.809990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.810007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.810014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.810186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.810355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.810363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.810369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.810376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.822402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.822837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.822881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.822904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.823327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.823496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.823504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.823511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.823517] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.835424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.835846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.835863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.835869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.836036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.836209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.836217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.836224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.836230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.848336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.848750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.848803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.848826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.849367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.849552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.849560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.849566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.849572] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.861133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.861538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.861554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.861561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.861720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.861878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.147  [2024-12-10 00:13:13.861889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.147  [2024-12-10 00:13:13.861895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.147  [2024-12-10 00:13:13.861900] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.147  [2024-12-10 00:13:13.874001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.147  [2024-12-10 00:13:13.874444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.147  [2024-12-10 00:13:13.874461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.147  [2024-12-10 00:13:13.874468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.147  [2024-12-10 00:13:13.874635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.147  [2024-12-10 00:13:13.874803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.874811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.874817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.874823] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.886859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.887271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.887317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.887340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.887575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.887744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.887752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.887759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.887765] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.899651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.899994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.900010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.900017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.900190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.900359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.900367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.900373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.900384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.912759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.913163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.913183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.913190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.913363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.913543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.913551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.913558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.913564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.925569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.925947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.925991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.926015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.926482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.926651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.926678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.926693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.926706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.940321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.940838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.940886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.940909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.941508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.941764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.941776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.941785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.941794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.953307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.953736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.953756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.953763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.953931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.954099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.148  [2024-12-10 00:13:13.954106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.148  [2024-12-10 00:13:13.954113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.148  [2024-12-10 00:13:13.954118] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.148  [2024-12-10 00:13:13.966121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.148  [2024-12-10 00:13:13.966476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.148  [2024-12-10 00:13:13.966492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.148  [2024-12-10 00:13:13.966499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.148  [2024-12-10 00:13:13.966657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.148  [2024-12-10 00:13:13.966822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.149  [2024-12-10 00:13:13.966830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.149  [2024-12-10 00:13:13.966836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.149  [2024-12-10 00:13:13.966841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.149  [2024-12-10 00:13:13.978892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.149  [2024-12-10 00:13:13.979305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.149  [2024-12-10 00:13:13.979320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.149  [2024-12-10 00:13:13.979327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.149  [2024-12-10 00:13:13.979486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.149  [2024-12-10 00:13:13.979645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.149  [2024-12-10 00:13:13.979652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.149  [2024-12-10 00:13:13.979658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.149  [2024-12-10 00:13:13.979664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.149  [2024-12-10 00:13:13.991755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.149  [2024-12-10 00:13:13.992173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.149  [2024-12-10 00:13:13.992189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.149  [2024-12-10 00:13:13.992212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.149  [2024-12-10 00:13:13.992383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.149  [2024-12-10 00:13:13.992554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.149  [2024-12-10 00:13:13.992562] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.149  [2024-12-10 00:13:13.992568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.149  [2024-12-10 00:13:13.992574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.409  [2024-12-10 00:13:14.004880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.409  [2024-12-10 00:13:14.005305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.409  [2024-12-10 00:13:14.005321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.409  [2024-12-10 00:13:14.005328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.409  [2024-12-10 00:13:14.005496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.409  [2024-12-10 00:13:14.005663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.409  [2024-12-10 00:13:14.005671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.409  [2024-12-10 00:13:14.005677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.409  [2024-12-10 00:13:14.005683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.409  [2024-12-10 00:13:14.017665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.409  [2024-12-10 00:13:14.018079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.409  [2024-12-10 00:13:14.018094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.409  [2024-12-10 00:13:14.018101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.409  [2024-12-10 00:13:14.018284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.409  [2024-12-10 00:13:14.018452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.409  [2024-12-10 00:13:14.018460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.409  [2024-12-10 00:13:14.018466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.409  [2024-12-10 00:13:14.018472] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.409  [2024-12-10 00:13:14.030509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.409  [2024-12-10 00:13:14.030858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.409  [2024-12-10 00:13:14.030873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.409  [2024-12-10 00:13:14.030880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.409  [2024-12-10 00:13:14.031038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.409  [2024-12-10 00:13:14.031223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.409  [2024-12-10 00:13:14.031234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.409  [2024-12-10 00:13:14.031241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.409  [2024-12-10 00:13:14.031247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.409  [2024-12-10 00:13:14.043305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.409  [2024-12-10 00:13:14.043706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.409  [2024-12-10 00:13:14.043721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.409  [2024-12-10 00:13:14.043728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.409  [2024-12-10 00:13:14.043886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.409  [2024-12-10 00:13:14.044045] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.409  [2024-12-10 00:13:14.044053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.409  [2024-12-10 00:13:14.044059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.409  [2024-12-10 00:13:14.044065] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.409  [2024-12-10 00:13:14.056036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.409  [2024-12-10 00:13:14.056474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.409  [2024-12-10 00:13:14.056491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.409  [2024-12-10 00:13:14.056498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.409  [2024-12-10 00:13:14.056666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.409  [2024-12-10 00:13:14.056834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.409  [2024-12-10 00:13:14.056841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.409  [2024-12-10 00:13:14.056847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.409  [2024-12-10 00:13:14.056853] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.068847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.069266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.069311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.069334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.069857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.070017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.070024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.070030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.070036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.081712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.082124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.082181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.082206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.082800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.082969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.082976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.082984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.082990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.094522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.094977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.094992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.094998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.095171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.095359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.095367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.095374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.095380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.107328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.107723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.107767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.107790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.108254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.108423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.108431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.108437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.108443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.120217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.120623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.120642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.120649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.120817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.120985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.120993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.121000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.121006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.133088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.133618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.133637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.133644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.133813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.133983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.133991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.133997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.134003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.145832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.146252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.146269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.146276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.146444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.146613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.146621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.146627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.146633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.158667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.159081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.410  [2024-12-10 00:13:14.159098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.410  [2024-12-10 00:13:14.159104] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.410  [2024-12-10 00:13:14.159292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.410  [2024-12-10 00:13:14.159461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.410  [2024-12-10 00:13:14.159470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.410  [2024-12-10 00:13:14.159477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.410  [2024-12-10 00:13:14.159484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.410  [2024-12-10 00:13:14.171745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.410  [2024-12-10 00:13:14.172192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.172237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.172260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.172842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.173450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.173458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.173465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.173471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.184515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.184935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.184979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.185002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.185479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.185639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.185646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.185652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.185658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.197270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.197686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.197702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.197709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.197876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.198044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.198052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.198062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.198068] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.210264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.210690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.210707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.210714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.210887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.211061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.211070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.211076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.211082] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.223052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.223473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.223490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.223497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.223664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.223834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.223842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.223848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.223854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.235914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.236307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.236324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.236331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.236489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.236648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.236656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.236662] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.236667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.248675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.249054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.249071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.249078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.249269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.249443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.249451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.249458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.249464] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.411  [2024-12-10 00:13:14.261738] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.411  [2024-12-10 00:13:14.262192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.411  [2024-12-10 00:13:14.262237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.411  [2024-12-10 00:13:14.262261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.411  [2024-12-10 00:13:14.262512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.411  [2024-12-10 00:13:14.262712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.411  [2024-12-10 00:13:14.262723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.411  [2024-12-10 00:13:14.262730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.411  [2024-12-10 00:13:14.262736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.671  [2024-12-10 00:13:14.274639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.671  [2024-12-10 00:13:14.275071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.671  [2024-12-10 00:13:14.275087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.671  [2024-12-10 00:13:14.275094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.671  [2024-12-10 00:13:14.275267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.671  [2024-12-10 00:13:14.275435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.671  [2024-12-10 00:13:14.275444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.671  [2024-12-10 00:13:14.275450] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.671  [2024-12-10 00:13:14.275457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.671  [2024-12-10 00:13:14.287559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.671  [2024-12-10 00:13:14.287990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.671  [2024-12-10 00:13:14.288006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.671  [2024-12-10 00:13:14.288016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.671  [2024-12-10 00:13:14.288189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.671  [2024-12-10 00:13:14.288358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.671  [2024-12-10 00:13:14.288366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.671  [2024-12-10 00:13:14.288372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.671  [2024-12-10 00:13:14.288378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.671  [2024-12-10 00:13:14.300688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.671  [2024-12-10 00:13:14.301052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.671  [2024-12-10 00:13:14.301069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.671  [2024-12-10 00:13:14.301076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.671  [2024-12-10 00:13:14.301253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.671  [2024-12-10 00:13:14.301427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.671  [2024-12-10 00:13:14.301435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.671  [2024-12-10 00:13:14.301441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.301448] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.313495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.313865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.313909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.313932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.314488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.314657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.314665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.314671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.314677] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.326359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.326665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.326681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.326688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.326856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.327027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.327036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.327042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.327048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.339297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.339647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.339663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.339670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.339838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.340006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.340014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.340020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.340026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.352224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.352566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.352609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.352632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.353227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.353813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.353822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.353829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.353835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.365211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.365553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.365569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.365576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.365743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.365912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.365920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.365929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.365936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.378095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.378449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.378466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.378473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.378641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.378810] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.378818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.378824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.378829] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.391083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.391490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.391506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.391513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.391680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.391849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.391857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.391864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.672  [2024-12-10 00:13:14.391870] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.672  [2024-12-10 00:13:14.404040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.672  [2024-12-10 00:13:14.404431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.672  [2024-12-10 00:13:14.404477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.672  [2024-12-10 00:13:14.404500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.672  [2024-12-10 00:13:14.404957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.672  [2024-12-10 00:13:14.405126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.672  [2024-12-10 00:13:14.405134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.672  [2024-12-10 00:13:14.405141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.405147] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.416940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.417312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.417330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.417337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.417497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.417658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.417667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.417674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.417681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.429948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.430331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.430349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.430357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.430516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.430676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.430686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.430692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.430699] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.442886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.443355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.443400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.443424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.443978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.444140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.444150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.444157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.444163] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673       6022.20 IOPS,    23.52 MiB/s
00:31:58.673  [2024-12-10 00:13:14.455805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.456208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.456227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.456240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.456399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.456560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.456569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.456575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.456581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.468737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.469191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.469236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.469260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.469843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.470259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.470269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.470276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.470283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.481582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.481991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.482009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.482017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.482192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.482361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.482371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.482377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.482384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.494558] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.494991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.495009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.495017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.495189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.495364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.495373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.495379] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.495385] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.673  [2024-12-10 00:13:14.507499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.673  [2024-12-10 00:13:14.507926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.673  [2024-12-10 00:13:14.507972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.673  [2024-12-10 00:13:14.507995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.673  [2024-12-10 00:13:14.508590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.673  [2024-12-10 00:13:14.509188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.673  [2024-12-10 00:13:14.509214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.673  [2024-12-10 00:13:14.509236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.673  [2024-12-10 00:13:14.509266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.674  [2024-12-10 00:13:14.520511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.674  [2024-12-10 00:13:14.520920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.674  [2024-12-10 00:13:14.520964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.674  [2024-12-10 00:13:14.520988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.674  [2024-12-10 00:13:14.521472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.674  [2024-12-10 00:13:14.521635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.674  [2024-12-10 00:13:14.521644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.674  [2024-12-10 00:13:14.521650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.674  [2024-12-10 00:13:14.521657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.933  [2024-12-10 00:13:14.533524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.933  [2024-12-10 00:13:14.533994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.534012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.534021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.534199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.534375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.534384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.534395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.534403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.546460] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.546881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.546899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.546907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.547066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.547233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.547243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.547250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.547257] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.559412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.559715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.559760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.559784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.560260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.560423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.560433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.560439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.560446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.572285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.572620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.572638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.572646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.572804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.572964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.572974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.572981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.572987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.585101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.585511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.585529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.585536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.585695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.585855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.585864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.585871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.585877] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.598010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.598386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.598404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.598412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.598580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.598749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.598759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.598766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.598772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.610950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.611329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.611347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.611355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.611514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.611674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.611683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.611689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.611695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.623795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.624141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.624159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.624173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.624333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.624493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.624503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.624509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.624516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.636756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.934  [2024-12-10 00:13:14.637100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.934  [2024-12-10 00:13:14.637117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.934  [2024-12-10 00:13:14.637124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.934  [2024-12-10 00:13:14.637288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.934  [2024-12-10 00:13:14.637449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.934  [2024-12-10 00:13:14.637458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.934  [2024-12-10 00:13:14.637464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.934  [2024-12-10 00:13:14.637471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.934  [2024-12-10 00:13:14.649700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.650100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.650118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.650126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.650299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.650469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.650479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.650486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.650492] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.662637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.663062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.663080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.663087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.663265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.663442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.663452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.663459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.663466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.675481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.675876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.675894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.675901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.676059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.676225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.676235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.676242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.676249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.688481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.688908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.688950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.688975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.689569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.689745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.689754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.689761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.689767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.701411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.701797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.701815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.701822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.701980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.702140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.702149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.702159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.702173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.714174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.714566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.714583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.714590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.714749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.714909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.714918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.714925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.714931] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.727091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.727442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.727460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.727468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.727627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.727788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.727798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.727804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.727811] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.739970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.740378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.740396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.740403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.740563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.740722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.740731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.740737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.935  [2024-12-10 00:13:14.740744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.935  [2024-12-10 00:13:14.752786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.935  [2024-12-10 00:13:14.753201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.935  [2024-12-10 00:13:14.753219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.935  [2024-12-10 00:13:14.753226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.935  [2024-12-10 00:13:14.753386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.935  [2024-12-10 00:13:14.753547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.935  [2024-12-10 00:13:14.753556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.935  [2024-12-10 00:13:14.753562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.936  [2024-12-10 00:13:14.753569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.936  [2024-12-10 00:13:14.765560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.936  [2024-12-10 00:13:14.765978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.936  [2024-12-10 00:13:14.765995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.936  [2024-12-10 00:13:14.766003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.936  [2024-12-10 00:13:14.766162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.936  [2024-12-10 00:13:14.766352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.936  [2024-12-10 00:13:14.766362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.936  [2024-12-10 00:13:14.766368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.936  [2024-12-10 00:13:14.766374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:58.936  [2024-12-10 00:13:14.778490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:58.936  [2024-12-10 00:13:14.778830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:58.936  [2024-12-10 00:13:14.778847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:58.936  [2024-12-10 00:13:14.778854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:58.936  [2024-12-10 00:13:14.779013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:58.936  [2024-12-10 00:13:14.779180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:58.936  [2024-12-10 00:13:14.779190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:58.936  [2024-12-10 00:13:14.779213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:58.936  [2024-12-10 00:13:14.779221] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.196  [2024-12-10 00:13:14.791620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.196  [2024-12-10 00:13:14.792036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.196  [2024-12-10 00:13:14.792053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.196  [2024-12-10 00:13:14.792064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.196  [2024-12-10 00:13:14.792239] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.196  [2024-12-10 00:13:14.792408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.196  [2024-12-10 00:13:14.792418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.196  [2024-12-10 00:13:14.792425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.196  [2024-12-10 00:13:14.792431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.196  [2024-12-10 00:13:14.804507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.196  [2024-12-10 00:13:14.804912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.196  [2024-12-10 00:13:14.804958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.196  [2024-12-10 00:13:14.804982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.196  [2024-12-10 00:13:14.805580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.196  [2024-12-10 00:13:14.805800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.196  [2024-12-10 00:13:14.805810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.196  [2024-12-10 00:13:14.805818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.196  [2024-12-10 00:13:14.805825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.196  [2024-12-10 00:13:14.819444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.196  [2024-12-10 00:13:14.819964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.819986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.819997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.820259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.820515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.820528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.820538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.820548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.832450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.832867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.832917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.832941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.833540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.833865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.833875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.833881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.833888] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.845212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.845623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.845641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.845648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.845807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.845967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.845977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.845983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.845989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.858013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.858435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.858481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.858505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.859006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.859173] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.859181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.859188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.859193] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.870833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.871217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.871235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.871243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.871434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.871606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.871615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.871624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.871631] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.883742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.884162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.884187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.884195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.884364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.884538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.884547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.884554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.884560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.896605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.897021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.897038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.897046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.897227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.897397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.897407] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.897414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.897422] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.909339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.909681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.909726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.197  [2024-12-10 00:13:14.909750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.197  [2024-12-10 00:13:14.910175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.197  [2024-12-10 00:13:14.910361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.197  [2024-12-10 00:13:14.910370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.197  [2024-12-10 00:13:14.910377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.197  [2024-12-10 00:13:14.910384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.197  [2024-12-10 00:13:14.922164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.197  [2024-12-10 00:13:14.922514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.197  [2024-12-10 00:13:14.922532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.922539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.922708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.922877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.922886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.922893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.922899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:14.935044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:14.935458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:14.935477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.935485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.935653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.935823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.935833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.935841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.935849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:14.948133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:14.948494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:14.948512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.948519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.948677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.948837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.948846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.948852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.948858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:14.960983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:14.961309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:14.961327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.961338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.961506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.961675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.961684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.961691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.961697] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:14.973897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:14.974312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:14.974362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.974386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.974969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.975471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.975481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.975488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.975495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:14.986764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:14.987133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:14.987189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:14.987214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:14.987792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:14.988193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:14.988212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:14.988226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:14.988240] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:15.001631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:15.002078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:15.002101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:15.002112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:15.002376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:15.002634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:15.002651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:15.002661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:15.002670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:15.014737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:15.015139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:15.015157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:15.015165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:15.015344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.198  [2024-12-10 00:13:15.015517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.198  [2024-12-10 00:13:15.015527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.198  [2024-12-10 00:13:15.015534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.198  [2024-12-10 00:13:15.015541] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.198  [2024-12-10 00:13:15.027649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.198  [2024-12-10 00:13:15.028065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.198  [2024-12-10 00:13:15.028109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.198  [2024-12-10 00:13:15.028133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.198  [2024-12-10 00:13:15.028602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.199  [2024-12-10 00:13:15.028764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.199  [2024-12-10 00:13:15.028772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.199  [2024-12-10 00:13:15.028778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.199  [2024-12-10 00:13:15.028784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.199  [2024-12-10 00:13:15.040520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.199  [2024-12-10 00:13:15.040943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.199  [2024-12-10 00:13:15.040989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.199  [2024-12-10 00:13:15.041013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.199  [2024-12-10 00:13:15.041477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.199  [2024-12-10 00:13:15.041648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.199  [2024-12-10 00:13:15.041658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.199  [2024-12-10 00:13:15.041664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.199  [2024-12-10 00:13:15.041674] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.459  [2024-12-10 00:13:15.053620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.459  [2024-12-10 00:13:15.054048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.459  [2024-12-10 00:13:15.054065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.459  [2024-12-10 00:13:15.054073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.459  [2024-12-10 00:13:15.054253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.459  [2024-12-10 00:13:15.054427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.459  [2024-12-10 00:13:15.054437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.459  [2024-12-10 00:13:15.054444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.459  [2024-12-10 00:13:15.054461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.459  [2024-12-10 00:13:15.066434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.459  [2024-12-10 00:13:15.066779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.459  [2024-12-10 00:13:15.066796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.459  [2024-12-10 00:13:15.066804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.459  [2024-12-10 00:13:15.066962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.459  [2024-12-10 00:13:15.067122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.459  [2024-12-10 00:13:15.067131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.459  [2024-12-10 00:13:15.067138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.459  [2024-12-10 00:13:15.067144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.459  [2024-12-10 00:13:15.079273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.459  [2024-12-10 00:13:15.079707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.459  [2024-12-10 00:13:15.079751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.459  [2024-12-10 00:13:15.079775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.459  [2024-12-10 00:13:15.080285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.459  [2024-12-10 00:13:15.080447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.459  [2024-12-10 00:13:15.080456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.459  [2024-12-10 00:13:15.080463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.459  [2024-12-10 00:13:15.080470] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.459  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3243768 Killed                  "${NVMF_APP[@]}" "$@"
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.459  [2024-12-10 00:13:15.092306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.459  [2024-12-10 00:13:15.092729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.459  [2024-12-10 00:13:15.092747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.459  [2024-12-10 00:13:15.092756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.459  [2024-12-10 00:13:15.092929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.459  [2024-12-10 00:13:15.093105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.459  [2024-12-10 00:13:15.093115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.459  [2024-12-10 00:13:15.093123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.459  [2024-12-10 00:13:15.093129] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3245123
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3245123
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3245123 ']'
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:59.459  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:59.459   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.459  [2024-12-10 00:13:15.105360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.459  [2024-12-10 00:13:15.105810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.459  [2024-12-10 00:13:15.105827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.459  [2024-12-10 00:13:15.105836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.459  [2024-12-10 00:13:15.106009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.459  [2024-12-10 00:13:15.106191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.106200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.106207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.106213] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.118422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.118832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.118850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.118858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.119031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.119213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.119223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.119230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.119238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.131440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.131866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.131885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.131893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.132067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.132251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.132262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.132269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.132276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.143737] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:31:59.460  [2024-12-10 00:13:15.143778] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:59.460  [2024-12-10 00:13:15.144393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.144761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.144778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.144787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.144956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.145127] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.145136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.145143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.145151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.157455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.157869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.157888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.157896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.158070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.158252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.158262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.158271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.158278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.170510] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.170848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.170866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.170874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.171047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.171231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.171241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.171248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.171256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.183481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.183840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.183858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.183866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.460  [2024-12-10 00:13:15.184035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.460  [2024-12-10 00:13:15.184229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.460  [2024-12-10 00:13:15.184239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.460  [2024-12-10 00:13:15.184246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.460  [2024-12-10 00:13:15.184253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.460  [2024-12-10 00:13:15.196394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.460  [2024-12-10 00:13:15.196761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.460  [2024-12-10 00:13:15.196783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.460  [2024-12-10 00:13:15.196791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.196961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.197130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.197141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.197148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.197155] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.209472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.209882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.209900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.209908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.210081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.210264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.210274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.210281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.210288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.222487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.222845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.222862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.222870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.223038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.223216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.223227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.223235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.223243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.223434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:31:59.461  [2024-12-10 00:13:15.235523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.235947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.235966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.235974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.236147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.236327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.236338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.236345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.236353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.248459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.248893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.248912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.248920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.249089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.249267] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.249277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.249284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.249291] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.261353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.261758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.261776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.261784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.261952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.262125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.262135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.262143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.262150] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.262703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:59.461  [2024-12-10 00:13:15.262730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:59.461  [2024-12-10 00:13:15.262737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:59.461  [2024-12-10 00:13:15.262745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:59.461  [2024-12-10 00:13:15.262751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:59.461  [2024-12-10 00:13:15.263998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:59.461  [2024-12-10 00:13:15.264109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:59.461  [2024-12-10 00:13:15.264110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:31:59.461  [2024-12-10 00:13:15.274368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.274825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.274847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.274856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.275032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.461  [2024-12-10 00:13:15.275216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.461  [2024-12-10 00:13:15.275227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.461  [2024-12-10 00:13:15.275235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.461  [2024-12-10 00:13:15.275244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.461  [2024-12-10 00:13:15.287468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.461  [2024-12-10 00:13:15.287934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.461  [2024-12-10 00:13:15.287957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.461  [2024-12-10 00:13:15.287965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.461  [2024-12-10 00:13:15.288141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.462  [2024-12-10 00:13:15.288326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.462  [2024-12-10 00:13:15.288337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.462  [2024-12-10 00:13:15.288345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.462  [2024-12-10 00:13:15.288353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.462  [2024-12-10 00:13:15.300583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.462  [2024-12-10 00:13:15.301045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.462  [2024-12-10 00:13:15.301067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.462  [2024-12-10 00:13:15.301075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.462  [2024-12-10 00:13:15.301257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.462  [2024-12-10 00:13:15.301435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.462  [2024-12-10 00:13:15.301444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.462  [2024-12-10 00:13:15.301452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.462  [2024-12-10 00:13:15.301459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.462  [2024-12-10 00:13:15.313689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.462  [2024-12-10 00:13:15.314122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.462  [2024-12-10 00:13:15.314151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.462  [2024-12-10 00:13:15.314160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.462  [2024-12-10 00:13:15.314343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.462  [2024-12-10 00:13:15.314520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.462  [2024-12-10 00:13:15.314531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.462  [2024-12-10 00:13:15.314538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.462  [2024-12-10 00:13:15.314547] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.721  [2024-12-10 00:13:15.326760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.721  [2024-12-10 00:13:15.327187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.721  [2024-12-10 00:13:15.327208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.721  [2024-12-10 00:13:15.327217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.721  [2024-12-10 00:13:15.327391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.721  [2024-12-10 00:13:15.327567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.721  [2024-12-10 00:13:15.327577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.721  [2024-12-10 00:13:15.327585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.721  [2024-12-10 00:13:15.327592] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.721  [2024-12-10 00:13:15.339831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.721  [2024-12-10 00:13:15.340244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.721  [2024-12-10 00:13:15.340263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.340271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.340446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.340621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.340631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.340638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.340645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722  [2024-12-10 00:13:15.352871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722  [2024-12-10 00:13:15.353232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.353250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.353259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.353433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.353613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.353623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.353631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.353637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:59.722  [2024-12-10 00:13:15.365851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.722  [2024-12-10 00:13:15.366288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.366308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.366316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.366489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.366664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.366674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.366681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.366687] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722  [2024-12-10 00:13:15.378905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722  [2024-12-10 00:13:15.379247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.379265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.379274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.379447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.379622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.379632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.379639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.379645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722  [2024-12-10 00:13:15.392038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722  [2024-12-10 00:13:15.392378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.392396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.392404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.392581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.392758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.392768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.392774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.392783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.722  [2024-12-10 00:13:15.405161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722  [2024-12-10 00:13:15.405567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.405584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.405592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.405766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.405941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.405950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.405958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.405965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722  [2024-12-10 00:13:15.407827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:59.722   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.722  [2024-12-10 00:13:15.418191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.722  [2024-12-10 00:13:15.418590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.722  [2024-12-10 00:13:15.418608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.722  [2024-12-10 00:13:15.418616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.722  [2024-12-10 00:13:15.418789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.722  [2024-12-10 00:13:15.418964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.722  [2024-12-10 00:13:15.418974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.722  [2024-12-10 00:13:15.418981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.722  [2024-12-10 00:13:15.418992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.722  [2024-12-10 00:13:15.431214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.723  [2024-12-10 00:13:15.431644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.723  [2024-12-10 00:13:15.431662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.723  [2024-12-10 00:13:15.431670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.723  [2024-12-10 00:13:15.431845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.723  [2024-12-10 00:13:15.432020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.723  [2024-12-10 00:13:15.432030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.723  [2024-12-10 00:13:15.432037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.723  [2024-12-10 00:13:15.432043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.723  Malloc0
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.723  [2024-12-10 00:13:15.444289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.723  [2024-12-10 00:13:15.444712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.723  [2024-12-10 00:13:15.444730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.723  [2024-12-10 00:13:15.444738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.723  [2024-12-10 00:13:15.444911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.723  [2024-12-10 00:13:15.445086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.723  [2024-12-10 00:13:15.445096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.723  [2024-12-10 00:13:15.445104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.723  [2024-12-10 00:13:15.445111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.723       5018.50 IOPS,    19.60 MiB/s
[2024-12-09T23:13:15.580Z]  00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.723  [2024-12-10 00:13:15.457365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.723  [2024-12-10 00:13:15.457825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:59.723  [2024-12-10 00:13:15.457843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7c7e0 with addr=10.0.0.2, port=4420
00:31:59.723  [2024-12-10 00:13:15.457851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7c7e0 is same with the state(6) to be set
00:31:59.723  [2024-12-10 00:13:15.458029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7c7e0 (9): Bad file descriptor
00:31:59.723  [2024-12-10 00:13:15.458210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:31:59.723  [2024-12-10 00:13:15.458221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:31:59.723  [2024-12-10 00:13:15.458227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:31:59.723  [2024-12-10 00:13:15.458234] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:31:59.723  [2024-12-10 00:13:15.465513] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:59.723  [2024-12-10 00:13:15.470467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:59.723   00:13:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3244229
00:31:59.723  [2024-12-10 00:13:15.532376] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:32:01.601       5811.29 IOPS,    22.70 MiB/s
[2024-12-09T23:13:18.485Z]      6502.25 IOPS,    25.40 MiB/s
[2024-12-09T23:13:19.486Z]      7053.56 IOPS,    27.55 MiB/s
[2024-12-09T23:13:20.874Z]      7513.10 IOPS,    29.35 MiB/s
[2024-12-09T23:13:21.811Z]      7887.55 IOPS,    30.81 MiB/s
[2024-12-09T23:13:22.752Z]      8176.17 IOPS,    31.94 MiB/s
[2024-12-09T23:13:23.691Z]      8431.92 IOPS,    32.94 MiB/s
[2024-12-09T23:13:24.626Z]      8656.21 IOPS,    33.81 MiB/s
[2024-12-09T23:13:24.626Z]      8852.73 IOPS,    34.58 MiB/s
00:32:08.769                                                                                                  Latency(us)
00:32:08.769  
[2024-12-09T23:13:24.626Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:08.769  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:32:08.769  	 Verification LBA range: start 0x0 length 0x4000
00:32:08.769  	 Nvme1n1             :      15.00    8858.33      34.60   11063.47     0.00    6405.51     436.91   14293.09
00:32:08.769  
[2024-12-09T23:13:24.626Z]  ===================================================================================================================
00:32:08.769  
[2024-12-09T23:13:24.626Z]  Total                       :               8858.33      34.60   11063.47     0.00    6405.51     436.91   14293.09
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:09.028  rmmod nvme_tcp
00:32:09.028  rmmod nvme_fabrics
00:32:09.028  rmmod nvme_keyring
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3245123 ']'
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3245123
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3245123 ']'
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3245123
00:32:09.028    00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:09.028    00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3245123
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3245123'
00:32:09.028  killing process with pid 3245123
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3245123
00:32:09.028   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3245123
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:09.288   00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:09.288    00:13:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:11.193   00:13:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:11.193  
00:32:11.194  real	0m25.915s
00:32:11.194  user	1m0.238s
00:32:11.194  sys	0m6.705s
00:32:11.194   00:13:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:11.194   00:13:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:32:11.194  ************************************
00:32:11.194  END TEST nvmf_bdevperf
00:32:11.194  ************************************
00:32:11.453   00:13:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:11.453   00:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:32:11.453   00:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:11.453   00:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:11.453  ************************************
00:32:11.453  START TEST nvmf_target_disconnect
00:32:11.453  ************************************
00:32:11.453   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:32:11.453  * Looking for test storage...
00:32:11.453  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:11.453     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:32:11.453     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:11.453    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:11.453     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:32:11.453     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:32:11.453     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:11.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:11.454  		--rc genhtml_branch_coverage=1
00:32:11.454  		--rc genhtml_function_coverage=1
00:32:11.454  		--rc genhtml_legend=1
00:32:11.454  		--rc geninfo_all_blocks=1
00:32:11.454  		--rc geninfo_unexecuted_blocks=1
00:32:11.454  		
00:32:11.454  		'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:11.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:11.454  		--rc genhtml_branch_coverage=1
00:32:11.454  		--rc genhtml_function_coverage=1
00:32:11.454  		--rc genhtml_legend=1
00:32:11.454  		--rc geninfo_all_blocks=1
00:32:11.454  		--rc geninfo_unexecuted_blocks=1
00:32:11.454  		
00:32:11.454  		'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:11.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:11.454  		--rc genhtml_branch_coverage=1
00:32:11.454  		--rc genhtml_function_coverage=1
00:32:11.454  		--rc genhtml_legend=1
00:32:11.454  		--rc geninfo_all_blocks=1
00:32:11.454  		--rc geninfo_unexecuted_blocks=1
00:32:11.454  		
00:32:11.454  		'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:11.454  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:11.454  		--rc genhtml_branch_coverage=1
00:32:11.454  		--rc genhtml_function_coverage=1
00:32:11.454  		--rc genhtml_legend=1
00:32:11.454  		--rc geninfo_all_blocks=1
00:32:11.454  		--rc geninfo_unexecuted_blocks=1
00:32:11.454  		
00:32:11.454  		'
00:32:11.454   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:11.454     00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:11.454      00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:11.454      00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:11.454      00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:11.454      00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH
00:32:11.454      00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:32:11.454  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:11.454    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:11.454   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:32:11.454   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:11.714    00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable
00:32:11.714   00:13:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=()
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:18.284   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:32:18.285  Found 0000:af:00.0 (0x8086 - 0x159b)
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:32:18.285  Found 0000:af:00.1 (0x8086 - 0x159b)
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:32:18.285  Found net devices under 0000:af:00.0: cvl_0_0
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:32:18.285  Found net devices under 0000:af:00.1: cvl_0_1
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:18.285   00:13:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:18.285  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:18.285  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:32:18.285  
00:32:18.285  --- 10.0.0.2 ping statistics ---
00:32:18.285  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:18.285  rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:18.285  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:18.285  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms
00:32:18.285  
00:32:18.285  --- 10.0.0.1 ping statistics ---
00:32:18.285  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:18.285  rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms
00:32:18.285   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:18.286  ************************************
00:32:18.286  START TEST nvmf_target_disconnect_tc1
00:32:18.286  ************************************
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:18.286    00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:18.286    00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:18.286  [2024-12-10 00:13:33.340149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:18.286  [2024-12-10 00:13:33.340202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11190b0 with addr=10.0.0.2, port=4420
00:32:18.286  [2024-12-10 00:13:33.340222] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:32:18.286  [2024-12-10 00:13:33.340232] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:32:18.286  [2024-12-10 00:13:33.340239] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:32:18.286  spdk_nvme_probe() failed for transport address '10.0.0.2'
00:32:18.286  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
00:32:18.286  Initializing NVMe Controllers
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:32:18.286  
00:32:18.286  real	0m0.122s
00:32:18.286  user	0m0.048s
00:32:18.286  sys	0m0.073s
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:32:18.286  ************************************
00:32:18.286  END TEST nvmf_target_disconnect_tc1
00:32:18.286  ************************************
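The tc1 failure above is the expected outcome: the `reconnect` example is launched before any listener exists on 10.0.0.2:4420, so `connect()` fails with errno 111, which on Linux is `ECONNREFUSED`. A minimal sketch (not part of the SPDK suite) reproducing that errno against a port with no listener:

```python
import errno
import socket

def refused_errno() -> int:
    # Bind to grab a free local port, close the listener, then connect
    # to the now-dead port; the kernel refuses the connection at once.
    s = socket.socket()
    s.bind(("127.0.0.1", 0))
    port = s.getsockname()[1]
    s.close()
    try:
        socket.create_connection(("127.0.0.1", port), timeout=1)
    except ConnectionRefusedError as e:
        return e.errno
    return 0

# On Linux, ECONNREFUSED is the value 111 seen in the log above.
assert errno.ECONNREFUSED == 111
print(refused_errno())  # 111
```

Because tc1 wraps the run in `NOT ...`, the example's non-zero exit (`es=1`) is what makes the test pass.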
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:18.286  ************************************
00:32:18.286  START TEST nvmf_target_disconnect_tc2
00:32:18.286  ************************************
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3250200
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3250200
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3250200 ']'
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:18.286  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:18.286   00:13:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
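The `waitforlisten 3250200` step above blocks until the freshly started `nvmf_tgt` accepts connections on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A hedged Python sketch of that polling pattern (not SPDK's actual implementation):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(path: str, max_retries: int = 100, delay: float = 0.05) -> bool:
    # Poll until something is accepting connections on the UNIX socket,
    # or give up after max_retries attempts.
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True
        except OSError:
            time.sleep(delay)
        finally:
            s.close()
    return False

# Demo against a throwaway socket that starts listening shortly after.
path = os.path.join(tempfile.mkdtemp(), "demo.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)

def start_listening():
    time.sleep(0.1)
    srv.bind(path)
    srv.listen(1)

threading.Thread(target=start_listening, daemon=True).start()
print(wait_for_listen(path))  # True once the server comes up
```

Once the socket answers, `rpc_cmd` calls like `bdev_malloc_create` below can be issued against the target.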
00:32:18.286  [2024-12-10 00:13:33.480117] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:32:18.286  [2024-12-10 00:13:33.480158] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:18.286  [2024-12-10 00:13:33.559023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:18.286  [2024-12-10 00:13:33.598000] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:18.286  [2024-12-10 00:13:33.598041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:18.286  [2024-12-10 00:13:33.598048] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:18.286  [2024-12-10 00:13:33.598054] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:18.286  [2024-12-10 00:13:33.598058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:18.286  [2024-12-10 00:13:33.599641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:18.286  [2024-12-10 00:13:33.599745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:18.286  [2024-12-10 00:13:33.599828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:18.286  [2024-12-10 00:13:33.599829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:18.545   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.546  Malloc0
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.546   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.804  [2024-12-10 00:13:34.405717] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.804   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.805  [2024-12-10 00:13:34.434657] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3250428
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:32:18.805   00:13:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:32:20.719   00:13:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3250200
00:32:20.719   00:13:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  [2024-12-10 00:13:36.463633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  [2024-12-10 00:13:36.463833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Read completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.719  Write completed with error (sct=0, sc=8)
00:32:20.719  starting I/O failed
00:32:20.720  Write completed with error (sct=0, sc=8)
00:32:20.720  starting I/O failed
00:32:20.720  Write completed with error (sct=0, sc=8)
00:32:20.720  starting I/O failed
00:32:20.720  Read completed with error (sct=0, sc=8)
00:32:20.720  starting I/O failed
00:32:20.720  Write completed with error (sct=0, sc=8)
00:32:20.720  starting I/O failed
00:32:20.720  Write completed with error (sct=0, sc=8)
00:32:20.720  starting I/O failed
00:32:20.720  [2024-12-10 00:13:36.464032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
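In the completion dump above, each aborted I/O is logged as a pair of lines: `<op> completed with error (sct=0, sc=8)` followed by `starting I/O failed`. Status code type 0 is the NVMe generic status set, where sc=8 decodes to "Command Aborted due to SQ Deletion" — the expected report for I/Os still in flight when `kill -9` tears down the target's queue pairs. A hypothetical post-processing helper (not from the SPDK suite) tallying such a dump:

```python
from collections import Counter

sample = """\
Read completed with error (sct=0, sc=8)
starting I/O failed
Write completed with error (sct=0, sc=8)
starting I/O failed
Read completed with error (sct=0, sc=8)
starting I/O failed
"""

def tally_completions(log: str) -> Counter:
    # Count failed completions by operation ("Read" / "Write"),
    # ignoring the paired "starting I/O failed" lines.
    counts = Counter()
    for line in log.splitlines():
        if "completed with error" in line:
            counts[line.split()[0]] += 1
    return counts

print(tally_completions(sample))  # Counter({'Read': 2, 'Write': 1})
```

Applied to a full log, such a tally makes it easy to confirm that every outstanding queue-depth-32 I/O was aborted rather than lost silently.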
00:32:20.720  [2024-12-10 00:13:36.464286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.464309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.464453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.464463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.464633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.464664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.464810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.464844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.464992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.465026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.465237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.465271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.465402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.465436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.465677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.465709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.465975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.466009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.466218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.466261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.466389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.466402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.466585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.466618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.466821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.466854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.467053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.467086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.467221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.467255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.467404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.467438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.467677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.720  [2024-12-10 00:13:36.467710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.720  qpair failed and we were unable to recover it.
00:32:20.720  [2024-12-10 00:13:36.468043 .. 00:13:36.489918] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same *ERROR* group repeats for every retry in this interval: connect() failed, errno = 111; sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.723  [2024-12-10 00:13:36.490072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.723  qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.723  [2024-12-10 00:13:36.490218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.723  qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.723  [2024-12-10 00:13:36.490424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.723  qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.723  [2024-12-10 00:13:36.490675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.723  qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.723  [2024-12-10 00:13:36.490838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.723  qpair failed and we were unable to recover it.
00:32:20.723  [2024-12-10 00:13:36.490957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.490991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.491188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.491222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.491410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.491444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.491622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.491656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.491760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.491794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.491917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.491950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.492179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.492213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.492406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.492439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.492711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.492745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.492879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.492913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.493026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.493060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.493237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.493271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.493486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.493519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.493634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.493667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.493854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.493887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.494158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.494202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.494381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.494414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.494596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.494629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.494798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.494831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.494957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.494991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.495125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.495158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.495293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.495332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.495526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.495560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.495732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.495765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.496028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.496061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.496230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.496264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.496380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.496414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.724  [2024-12-10 00:13:36.496522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.724  [2024-12-10 00:13:36.496557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.724  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.496822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.496855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.497094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.497127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.497330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.497365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.497607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.497640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.497832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.497865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.498060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.498093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.498270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.498305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.498574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.498608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.498849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.498882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.499011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.499045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.499235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.499269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.499386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.499419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.499610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.499643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.499858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.499892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.500063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.500096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.500211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.500245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.500421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.500454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.500719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.500752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.500866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.500899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.501093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.501127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.501351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.501386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.501602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.501634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.501826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.501859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.502042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.502075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.502277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.502311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.502492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.502525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.502725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.502758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.502942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.502974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.503159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.503201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.503439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.503472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.503589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.503621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.725  [2024-12-10 00:13:36.503801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.725  [2024-12-10 00:13:36.503834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.725  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.504075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.504108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.504301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.504342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.504470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.504503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.504692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.504725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.504842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.504876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.505000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.505033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.505185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.505220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.505442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.505475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.505721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.505754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.505957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.505990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.506161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.506206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.506396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.506429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.506608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.506642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.506849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.506882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.507052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.507085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.507261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.507297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.726  [2024-12-10 00:13:36.507514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.726  [2024-12-10 00:13:36.507547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.726  qpair failed and we were unable to recover it.
00:32:20.729  [... the three lines above (connect() failed, errno = 111 / sock connection error of tqpair=0x7fcb68000b90 / qpair failed) repeat ~102 more times, timestamps 00:13:36.507731 through 00:13:36.529875 ...]
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.729  [2024-12-10 00:13:36.530064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.729  [2024-12-10 00:13:36.530098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.729  [2024-12-10 00:13:36.530281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.729  [2024-12-10 00:13:36.530321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.729  [2024-12-10 00:13:36.530425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.729  [2024-12-10 00:13:36.530458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.729  [2024-12-10 00:13:36.530709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.729  [2024-12-10 00:13:36.530742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.729  [2024-12-10 00:13:36.530932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.729  [2024-12-10 00:13:36.530966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.729  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.531158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.531202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.531391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.531425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.531673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.531707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.531973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.532006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.532200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.532236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.532499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.532532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.532662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.532696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.532876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.532909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.533080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.533113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.533238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.533272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.533520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.533553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.533735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.533769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.533901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.533935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.534186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.534219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.534436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.534469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.534675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.534709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.534899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.534932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.535178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.535212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.535415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.535449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.535705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.535738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.536017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.536050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.536186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.536220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.536397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.536431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.536608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.536682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.536893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.536930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.537052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.537085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.537270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.537306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.537522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.537556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.537749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.537781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.537957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.537990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.538232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.538269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.538388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.730  [2024-12-10 00:13:36.538421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.730  qpair failed and we were unable to recover it.
00:32:20.730  [2024-12-10 00:13:36.538538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.538571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.538757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.538791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.539056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.539090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.539273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.539308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.539548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.539581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.539788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.539822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.540071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.540105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.540323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.540357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.540462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.540495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.540682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.540716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.540888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.540922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.541102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.541135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.541393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.541429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.541606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.541639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.541831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.541864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.542042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.542075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.542194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.542228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.542471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.542504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.542694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.542732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.542974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.543007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.543269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.543304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.543425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.543458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.543641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.543673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.543861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.543893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.544155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.544197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.544380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.544413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.544630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.544663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.544833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.544866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.545004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.545038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.545216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.545250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.545356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.545389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.545504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.731  [2024-12-10 00:13:36.545537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.731  qpair failed and we were unable to recover it.
00:32:20.731  [2024-12-10 00:13:36.545701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.545734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.545848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.545881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.546006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.546039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.546306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.546340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.546452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.546484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.546612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.546646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.546888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.546922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.547031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.547069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.547332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.547366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.547627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.547660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.547782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.547816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.547939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.547972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.548086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.548120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:20.732  [2024-12-10 00:13:36.548330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:20.732  [2024-12-10 00:13:36.548366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:20.732  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.571238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.571272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.571448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.571481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.571736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.571770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.571989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.572022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.572199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.572234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.572419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.572453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.572569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.572601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.572871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.572905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.573112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.573145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.573350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.573384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.573570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.573603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.573779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.573812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.574038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.574336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.574487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.574645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.574793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.574967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.575000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.575183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.575217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.575395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.575428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.014  [2024-12-10 00:13:36.575664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.014  [2024-12-10 00:13:36.575738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.014  qpair failed and we were unable to recover it.
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.592385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.592419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.592604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.592638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.592772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.592804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.592988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.593021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.593128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.593162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.593316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.593350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.593588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.593620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.593759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.593794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.594060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.594094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.594347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.594382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.594554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.594588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.594858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.594891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.595084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.595116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.595382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.595418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.595537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.595571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.595770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.595803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.595914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.595948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.596072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.596102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.596291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.596325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.596518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.596551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.596723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.596763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.596959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.596992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.597188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.597223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.597410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.597445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.597638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.597670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.597882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.597916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.598161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.598207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.598451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.598485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.598588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.598621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.017  [2024-12-10 00:13:36.598816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.017  [2024-12-10 00:13:36.598850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.017  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.599043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.599076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.599202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.599237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.599443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.599476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.599669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.599702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.599839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.599873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.600009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.600042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.600162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.600203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.600416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.600450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.600716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.600749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.600996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.601029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.601188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.601222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.601341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.601375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.601602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.601636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.601813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.601847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.602035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.602068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.602248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.602282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.602391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.602423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.602557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.602590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.602879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.602912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.603964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.603998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.604264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.604299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.604426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.604459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.604671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.604704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.604966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.604999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.605123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.605155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.605432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.605466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.018  [2024-12-10 00:13:36.605677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.018  [2024-12-10 00:13:36.605715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.018  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.605844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.605877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.606052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.606086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.606337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.606372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.606554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.606587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.606781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.606814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.606938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.606971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.607102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.607134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.607321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.607355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.607594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.607628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.607739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.607771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.607887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.607920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.608105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.608139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.608327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.608361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.608481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.608515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.608634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.608667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.608908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.608941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.609051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.609084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.609278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.609313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.609439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.609472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.609661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.609694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.609961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.609994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.610099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.610132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.610318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.610353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.610591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.610625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.610865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.610898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.611075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.611108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.611244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.611285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.019  qpair failed and we were unable to recover it.
00:32:21.019  [2024-12-10 00:13:36.611503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.019  [2024-12-10 00:13:36.611537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.611654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.611687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.611893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.611926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.612176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.612210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.612400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.612433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.612604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.612637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.612745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.612778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.613045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.613078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.613327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.613362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.613661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.613853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.613887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.614936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.614970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.615161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.615203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.615491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.615524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.615654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.615687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.615927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.615960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.616156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.616198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.616464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.616497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.616635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.616669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.616843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.616876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.616996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.617029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.617135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.617261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.617514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.617547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.617791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.617824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.618091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.618124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.618314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.618348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.618546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.618579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.618771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.020  [2024-12-10 00:13:36.618803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.020  qpair failed and we were unable to recover it.
00:32:21.020  [2024-12-10 00:13:36.618975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.619008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.619246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.619282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.619544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.619579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.619792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.619825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.619934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.619967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.620189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.620223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.620438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.620471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.620581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.620620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.620801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.620836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.621074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.621107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.621286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.621320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.621460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.621494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.621738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.621771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.622014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.622048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.622155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.622198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.622405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.622438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.622564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.622597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.622785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.622818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.623962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.623995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.624188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.624222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.624343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.624378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.624498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.624531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.624705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.624739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.624915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.624948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.625134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.625197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.625391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.625424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.625667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.625700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.021  qpair failed and we were unable to recover it.
00:32:21.021  [2024-12-10 00:13:36.625815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.021  [2024-12-10 00:13:36.625848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.626022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.626056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.626237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.626277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.626394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.626427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.626649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.626682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.626909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.626941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.627189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.627223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.627347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.627381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.627613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.627646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.627752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.627784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.627895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.627929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.628196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.628230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.628424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.628457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.628583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.628616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.628805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.628838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.629028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.629062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.629225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.629262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.629386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.629419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.629595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.629628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.629915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.629948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.630085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.630118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [2024-12-10 00:13:36.630336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.022  [2024-12-10 00:13:36.630370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.022  qpair failed and we were unable to recover it.
00:32:21.022  [... the preceding three-line error pattern (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats 104 more times with identical content; event timestamps run from 00:13:36.630633 through 00:13:36.652631, log timestamps 00:32:21.022 through 00:32:21.025 ...]
00:32:21.025  [2024-12-10 00:13:36.652759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.025  [2024-12-10 00:13:36.652792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.025  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.652986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.653019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.653293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.653328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.653470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.653503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.653708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.653740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.653923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.653957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.654158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.654199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.654436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.654469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.654642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.654676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.654852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.654885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.655127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.655160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.655300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.655334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.655533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.655566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.655812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.655846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.656032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.656066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.656304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.656340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.656541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.656574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.656746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.656778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.656984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.657017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.657143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.657185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.657365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.657398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.657574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.657607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.657803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.657835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.658079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.658112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.658237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.658271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.658510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.658545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.658795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.658830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.659044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.659077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.659274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.659309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.659569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.659602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.659773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.659807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.660047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.660080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.660253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.026  [2024-12-10 00:13:36.660288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.026  qpair failed and we were unable to recover it.
00:32:21.026  [2024-12-10 00:13:36.660546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.660579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.660795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.660828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.661023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.661057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.661254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.661289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.661406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.661439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.661559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.661592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.661775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.661808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.662083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.662116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.662403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.662439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.662565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.662598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.662792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.662826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.663094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.663127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.663358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.663392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.663571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.663604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.663805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.663837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.664016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.664049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.664226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.664260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.664370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.664403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.664589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.664623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.664803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.664837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.665067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.665232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.665436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.665722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.665873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.665983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.666016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.666205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.666239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.666508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.666541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.666728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.666762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.666949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.666982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.667149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.667208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.667418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.667451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.027  [2024-12-10 00:13:36.667639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.027  [2024-12-10 00:13:36.667673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.027  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.667794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.667827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.667949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.667983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.668188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.668224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.668396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.668429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.668711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.668745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.668932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.668966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.669157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.669199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.669438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.669472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.669604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.669638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.669823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.669856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.670942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.670976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.671163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.671206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.671492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.671525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.671721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.671754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.671935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.671969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.672142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.672186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.672370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.672404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.672546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.672579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.672771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.672805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.673095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.673129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.673330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.673365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.673627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.673660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.673900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.673933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.674122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.674160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.674357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.674390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.674601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.674634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.674830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.028  [2024-12-10 00:13:36.674864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.028  qpair failed and we were unable to recover it.
00:32:21.028  [2024-12-10 00:13:36.675037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.675070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.675208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.675244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.675371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.675404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.675620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.675653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.675949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.675982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.676175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.676209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.676391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.676425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.676651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.676685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.676817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.676851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.677041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.677075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.677323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.677359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.677530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.677563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.677751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.677784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.677967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.678000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.678242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.678279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.678548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.678582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.678774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.678808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.678931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.678964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.679142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.679186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.679446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.679480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.679673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.679706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.679945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.679978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.680248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.680283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.680522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.680562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.680752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.680785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.680982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.681015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.681217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.681252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.681434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.681468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.681585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.681618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.029  [2024-12-10 00:13:36.681809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.029  [2024-12-10 00:13:36.681842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.029  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.682111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.682144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.682283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.682317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.682496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.682528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.682711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.682744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.682853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.682887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.683965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.683998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.684261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.684297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.684489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.684521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.684637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.684672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.684787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.684821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.685004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.685037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.685220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.685254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.685428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.685461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.685667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.685700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.685891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.685925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.686135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.686175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.686427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.686461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.686573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.686607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.686726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.686760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.686933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.686966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.687153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.687195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.687326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.687360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.687579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.687612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.687734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.687768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.687953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.687986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.688089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.688122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.030  qpair failed and we were unable to recover it.
00:32:21.030  [2024-12-10 00:13:36.688253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.030  [2024-12-10 00:13:36.688288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.688531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.688563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.688744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.688777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.688892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.688937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.689064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.689097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.689358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.689393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.689506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.689538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.689776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.689809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.689947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.689980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.690156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.690198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.690376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.690409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.690586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.690620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.690810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.690844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.691032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.691065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.691211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.691244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.691427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.691461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.691652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.691685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.031  [2024-12-10 00:13:36.691823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.031  [2024-12-10 00:13:36.691857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.031  qpair failed and we were unable to recover it.
00:32:21.034  [... the three-line sequence above (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xb491a0, addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeated ~100 more times between 00:13:36.692 and 00:13:36.713 ...]
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.713747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.713783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.714026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.714060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.714260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.714294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.714499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.714535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.714729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.714765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.715029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.715064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.715244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.715278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.715514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.715550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.034  [2024-12-10 00:13:36.715672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.034  [2024-12-10 00:13:36.715705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.034  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.715952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.715986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.716109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.716142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.716401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.716434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.716557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.716590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.716716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.716749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.716869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.716901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.717109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.717142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.717346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.717379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.717571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.717611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.717740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.717774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.717961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.717994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.718164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.718209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.718346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.718380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.718503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.718536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.718722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.718755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.719886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.719920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.720115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.720148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.720408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.720442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.720565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.720598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.720797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.720830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.720955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.720989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.721108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.721142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.721272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.721306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.035  [2024-12-10 00:13:36.721423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.035  [2024-12-10 00:13:36.721458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.035  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.721585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.721620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.721881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.722086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.722257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.722418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.722629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.722855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.722984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.723017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.723281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.723316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.723509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.723543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.723661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.723694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.723822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.723857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.723981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.724013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.724138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.724200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.724381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.724414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.724519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.724553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.724749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.724785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.724978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.725119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.725367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.725531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.725675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.725815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.725848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.726056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.726219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.726437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.726588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.726764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.726968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.727003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.727248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.727283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.727465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.727499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.036  qpair failed and we were unable to recover it.
00:32:21.036  [2024-12-10 00:13:36.727689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.036  [2024-12-10 00:13:36.727722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.727916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.727948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.728087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.728122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.728258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.728294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.728424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.728457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.728638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.728671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.728924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.728957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.729148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.729191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.729368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.729401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.729524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.729558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.729736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.729769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.729947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.729981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.730186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.730221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.730414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.730449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.730563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.730597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.730797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.730831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.731028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.731077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.731288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.731323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.731432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.731465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.731696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.731880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.731914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.732039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.732072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.732266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.732300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.732435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.732469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.732672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.732705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.732897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.732930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.733045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.733079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.733266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.733300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.733413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.733445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.733686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.733720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.733921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.733954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.734221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.734255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.037  [2024-12-10 00:13:36.734456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.037  [2024-12-10 00:13:36.734491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.037  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.734754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.734787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.734971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.735131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.735305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.735470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.735614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.735839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.735872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.736062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.736095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.736270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.736305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.736484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.736517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.736641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.736674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.736853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.736889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.737009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.737042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.737301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.737336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.737515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.737548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.737725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.737758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.737887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.737921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.738843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.738967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.739152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.739305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.739458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.739602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.739881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.739914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.740038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.740072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.740248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.740283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.740458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.038  [2024-12-10 00:13:36.740492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.038  qpair failed and we were unable to recover it.
00:32:21.038  [2024-12-10 00:13:36.740734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.740768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.741012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.741046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.741241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.741276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.741462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.741496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.741684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.741719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.741975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.742201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.742349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.742575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.742725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.742933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.742968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.743081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.743114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.743243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.743278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.743534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.743567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.743807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.743841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.743965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.743999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.744212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.744248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.744434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.744468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.744591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.744625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.744798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.744837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.744959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.744993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.745210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.745246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.745376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.745410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.745648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.745683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.745976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.746009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.746223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.746258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.746446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.746479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.746763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.746797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.746990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.747023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.747152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.747192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.747324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.747366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.747539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.747572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.039  [2024-12-10 00:13:36.747755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.039  [2024-12-10 00:13:36.747789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.039  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.747933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.747967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.748139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.748194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.748319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.748353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.748528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.748562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.748825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.748858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.749047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.749081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.749195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.749230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.749357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.749391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.040  [2024-12-10 00:13:36.749604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.040  [2024-12-10 00:13:36.749638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.040  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.772236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.772279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.772383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.772417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.772603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.772636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.772831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.772865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.773057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.773092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.773301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.773337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.773464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.773499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.773741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.773775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.773978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.774012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.774129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.774162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.774430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.774475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.774716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.774751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.774948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.774984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.775159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.043  [2024-12-10 00:13:36.775204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.043  qpair failed and we were unable to recover it.
00:32:21.043  [2024-12-10 00:13:36.775329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.775362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.775536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.775569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.775687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.775722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.775980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.776948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.776981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.777114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.777148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.777271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.777305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.777479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.777513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.777757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.777791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.777913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.777947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.778143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.778186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.778375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.778409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.778582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.778615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.778719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.778753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.778944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.778978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.779222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.779258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.779451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.779484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.779594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.779629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.779806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.779840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.780023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.780057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.780198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.780233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.780472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.780506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.780678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.780711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.780883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.780918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.781044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.781077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.781328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.781363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.781570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.781604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.044  [2024-12-10 00:13:36.781778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.044  [2024-12-10 00:13:36.781812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.044  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.781994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.782028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.782212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.782246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.782380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.782414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.782590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.782624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.782826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.782864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.783084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.783243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.783402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.783625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.783778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.783978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.784013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.784202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.784238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.784432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.784465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.784638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.784672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.784843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.784877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.785013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.785047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.785302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.785338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.785515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.785548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.785667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.785702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.785942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.785975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.786187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.786223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.786426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.786460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.786699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.786732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.786909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.786945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.787103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.787279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.787499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.787650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.787866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.787972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.788005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.788110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.788143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.788370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.788404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.788648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.788681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.788875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.045  [2024-12-10 00:13:36.788908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.045  qpair failed and we were unable to recover it.
00:32:21.045  [2024-12-10 00:13:36.789088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.046  [2024-12-10 00:13:36.789121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.046  qpair failed and we were unable to recover it.
00:32:21.046  [2024-12-10 00:13:36.789375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.046  [2024-12-10 00:13:36.789411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.046  qpair failed and we were unable to recover it.
00:32:21.046  [2024-12-10 00:13:36.789621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.046  [2024-12-10 00:13:36.789654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.046  qpair failed and we were unable to recover it.
00:32:21.046  [... the above three-line sequence — connect() failed, errno = 111 (connection refused); sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously through 00:13:36.812 ...]
00:32:21.049  [2024-12-10 00:13:36.812369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.812403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.812517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.812550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.812760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.812794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.812984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.813190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.813398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.813551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.813799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.813946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.813979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.814103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.814135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.814316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.814350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.814534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.814567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.814739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.814771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.814986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.815144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.815381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.815532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.815739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.815942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.815975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.816226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.816261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.816467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.816500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.816622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.816654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.816828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.816862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.817104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.817138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.817326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.817360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.817607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.817640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.817912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.817945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.818099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.818132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.818279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.818314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.818569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.818602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.818773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.818806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.818922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.818956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.819165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.819208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.819452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.819485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.819612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.049  [2024-12-10 00:13:36.819645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.049  qpair failed and we were unable to recover it.
00:32:21.049  [2024-12-10 00:13:36.819816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.819849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.820035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.820069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.820355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.820391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.820508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.820541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.820785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.820818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.820952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.820991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.821177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.821212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.821431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.821464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.821719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.821752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.821993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.822026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.822232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.822266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.822438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.822471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.822605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.822638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.822875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.822908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.823177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.823211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.823321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.823354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.823545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.823578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.823758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.823791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.823975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.824009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.824134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.824187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.824456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.824490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.824774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.824807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.824982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.825015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.825138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.825182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.825381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.825414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.825680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.825713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.825901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.825934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.826124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.826157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.826360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.826394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.826589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.826622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.826796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.826829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.827017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.827051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.827186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.827225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.827427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.050  [2024-12-10 00:13:36.827461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.050  qpair failed and we were unable to recover it.
00:32:21.050  [2024-12-10 00:13:36.827646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.827679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.827839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.827873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.828073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.828106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.828229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.828263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.828458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.828491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.828752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.828786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.829850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.829884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.830128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.830162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.051  [2024-12-10 00:13:36.830383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.051  [2024-12-10 00:13:36.830416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.051  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.854206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.854240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.854418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.854451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.854747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.855018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.855052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.855254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.855289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.855408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.855441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.855679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.855711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.855900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.855934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.856150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.856193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.856385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.856418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.856546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.856579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.856862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.856895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.857138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.857182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.857398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.857430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.857558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.857592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.857778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.857811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.857981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.858019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.858211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.858246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.335  [2024-12-10 00:13:36.858367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.335  [2024-12-10 00:13:36.858400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.335  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.858653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.858687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.858857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.858890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.859060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.859093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.859283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.859318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.859504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.859537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.859718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.859751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.859938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.859972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.860142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.860186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.863408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.863446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.863686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.863719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.863851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.863884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.864130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.864163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.864388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.864421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.864610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.864643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.864760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.864794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.864978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.865011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.865181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.865214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.865423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.865456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.865633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.865666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.865875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.865908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.866082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.866116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.866324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.866359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.866538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.866572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.866860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.866893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.867079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.867112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.867315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.867349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.867484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.867517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.867631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.867665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.867856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.867890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.868097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.868130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.868274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.868308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.336  [2024-12-10 00:13:36.868572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.336  [2024-12-10 00:13:36.868606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.336  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.868719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.868753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.868938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.868972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.869093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.869127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.869313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.869346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.869618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.869651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.869840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.869873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.869993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.870032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.870150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.870192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.870393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.870426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.870598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.870632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.870891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.870925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.871051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.871084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.871208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.871243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.871491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.871523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.871709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.871742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.871984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.872018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.872131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.872165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.872348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.872381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.872646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.872679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.872870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.872904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.873174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.873207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.873397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.873430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.873549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.873582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.873721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.873754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.873877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.873911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.874093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.874125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.874319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.874353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.874467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.874500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.874782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.874815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.874989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.875022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.875147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.875191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.337  [2024-12-10 00:13:36.875401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.337  [2024-12-10 00:13:36.875434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.337  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.897761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.897794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.898061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.898095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.898246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.898422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.898456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.898699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.898732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.898938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.898971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.899140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.899186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.899372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.899405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.899607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.899640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.341  qpair failed and we were unable to recover it.
00:32:21.341  [2024-12-10 00:13:36.899764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.341  [2024-12-10 00:13:36.899798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.899979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.900012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.900184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.900469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.900502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.900689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.900722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.900832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.900865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.901052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.901085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.901295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.901328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.901569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.901602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.901728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.901761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.901957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.901990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.902204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.902239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.902381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.902415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.902623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.902657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.902841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.902873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.903113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.903147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.903433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.903468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.903603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.903637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.903900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.903933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.904067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.904100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.904299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.904333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.904526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.904558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.904743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.904776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.905035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.905068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.905277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.905311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.905442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.905475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.905601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.905634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.905873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.905906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.906041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.906074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.906320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.342  [2024-12-10 00:13:36.906355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.342  qpair failed and we were unable to recover it.
00:32:21.342  [2024-12-10 00:13:36.906544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.906577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.906695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.906728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.906904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.906937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.907052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.907085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.907279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.907313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.907419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.907453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.907643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.907676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.907847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.907880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.908119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.908159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.908347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.908380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.908660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.908693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.908869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.908901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.909093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.909127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.909240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.909275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.909449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.909481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.909665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.909699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.909901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.909935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.910207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.910243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.910419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.910453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.910691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.910723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.910986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.911020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.911207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.911241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.911493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.911527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.911636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.911670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.911844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.911878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.912163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.912204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.912383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.912664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.912697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.912897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.912931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.343  [2024-12-10 00:13:36.913108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.343  [2024-12-10 00:13:36.913141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.343  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.913329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.913364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.913493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.913526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.913636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.913669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.913860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.913894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.914115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.914148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.914279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.914319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.914540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.914574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.914768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.914801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.915076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.915287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.915447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.915587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.915870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.344  qpair failed and we were unable to recover it.
00:32:21.344  [2024-12-10 00:13:36.915982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.344  [2024-12-10 00:13:36.916015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  [... the preceding "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." message group repeats verbatim (timestamps advancing from 00:13:36.916 to 00:13:36.936) as the host keeps retrying tqpair=0xb491a0 against 10.0.0.2:4420 ...]
00:32:21.348  [2024-12-10 00:13:36.936923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.937058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.937092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.937300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.937335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.937556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.937589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.937703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.937737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.937856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.937888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.938001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.938035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.938270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.938304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.938420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.938454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.938574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.938607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.938800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.938833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.939010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.939044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.939238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.939272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.939382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.939416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.939591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.939625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.939815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.939850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.940925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.940959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.941132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.941173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.941296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.941329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.941452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.941486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.941695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.941727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.941899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.941932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.942113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.942147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.942415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.942448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.942663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.942696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.348  qpair failed and we were unable to recover it.
00:32:21.348  [2024-12-10 00:13:36.942870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.348  [2024-12-10 00:13:36.942903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.943015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.943049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.943286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.943321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.943577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.943610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.943792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.943825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.944018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.944052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.944245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.944281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.944468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.944502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.944688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.944721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.944859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.944893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.945940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.945974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.946226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.946261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.946444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.946478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.946590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.946623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.946757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.946792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.946974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.947007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.947213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.947248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.947438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.947474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.947657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.947690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.947893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.947927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.948113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.948147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.948284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.948317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.948426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.948459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.948677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.948710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.948834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.948868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.949054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.949086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.949208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.349  [2024-12-10 00:13:36.949243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.349  qpair failed and we were unable to recover it.
00:32:21.349  [2024-12-10 00:13:36.949419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.949452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.949567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.949600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.949716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.949756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.950035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.950195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.950354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.950572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.950723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.950994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.951028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.951202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.951237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.951411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.951446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.951630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.951665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.951807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.951841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.952028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.952061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.952194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.952228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.952411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.952445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.952566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.952601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.952791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.952824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.953044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.953076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.953278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.953313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.953447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.953480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.350  [2024-12-10 00:13:36.953613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.350  [2024-12-10 00:13:36.953647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.350  qpair failed and we were unable to recover it.
00:32:21.353  [... the same three-line error sequence (connect() failed, errno = 111; sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated with advancing timestamps through 00:13:36.973649; repeats omitted ...]
00:32:21.353  [2024-12-10 00:13:36.973680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.353  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.973782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.973812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.973925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.973957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.974100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.974332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.974533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.974739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.974871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.974971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.975872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.975978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.976186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.976344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.976565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.976709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.976880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.976910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.977888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.977918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.978054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.978278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.978479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.978611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.978843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.978972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.979002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.979259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.979292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.354  qpair failed and we were unable to recover it.
00:32:21.354  [2024-12-10 00:13:36.979407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.354  [2024-12-10 00:13:36.979438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.979558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.979588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.979704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.979735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.979836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.979866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.979968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.979998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.980223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.980265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.980378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.980409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.980587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.980617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.980739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.980770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.980883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.980913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.981090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.981122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.981252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.981285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.981518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.981548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.981661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.981691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.981829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.982096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.982232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.982377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.982578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.982787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.982985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.983964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.983995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.984095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.984125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.984332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.984364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.984562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.984592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.984713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.984745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.984938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.984970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.985233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.355  [2024-12-10 00:13:36.985264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.355  qpair failed and we were unable to recover it.
00:32:21.355  [2024-12-10 00:13:36.985364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.985395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.985567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.985598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.985772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.985802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.985933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.985964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.986197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.986230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.986423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.986454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.986556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.986587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.986699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.986729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.986835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.986866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.987949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.987979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.988146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.988221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.988396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.988428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.988528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.988560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.988736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.988768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.988891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.988922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.989092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.989122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.989311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.989343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.989473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.989503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.989598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.989629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.989863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.989894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.990091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.990310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.990455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.990670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.990814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.990984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.991017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.991145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.991187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.356  qpair failed and we were unable to recover it.
00:32:21.356  [2024-12-10 00:13:36.991367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.356  [2024-12-10 00:13:36.991398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.991509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.991541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.991800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.991831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.992014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.992046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.992249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.992282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.992465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.992494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.992664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.992695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.992862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.992893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.993079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.993111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.993317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.993348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.993533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.993565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.993672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.993702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.993875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.993907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.994885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.994916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.995082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.995113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.995362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.995394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.995499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.995531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.995702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.995733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.995849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.995881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.996050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.996087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.996355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.996389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.996639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.996670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.996790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.996821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.357  [2024-12-10 00:13:36.996992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.357  [2024-12-10 00:13:36.997022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.357  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.997136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.997175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.997302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.997333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.997499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.997530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.997766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.997797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.998039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.998237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.998368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.998523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.998790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.998985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.999016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.999277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.999310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.999428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.999459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.999668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.999702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.999814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:36.999845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:36.999968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.000000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.000223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.000257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.000494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.000541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.000728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.000761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.000951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.000985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.001155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.001199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.001384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.001418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.001681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.001715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.001850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.001890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.002137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.002177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.002317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.002351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.002540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.002574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.002753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.002786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.002976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.003010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.003130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.003163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.358  [2024-12-10 00:13:37.003294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.358  [2024-12-10 00:13:37.003328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.358  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.003516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.003550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.003669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.003704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.003881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.003915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.004096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.004130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.004396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.004431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.004614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.004648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.004769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.004801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.005014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.005048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.005242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.005277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.005451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.005484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.005676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.005709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.005834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.005867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.006060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.006094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.006289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.006325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.006512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.006546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.006671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.006705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.006951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.006984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.007088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.007131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.007279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.007313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [2024-12-10 00:13:37.007439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.359  [2024-12-10 00:13:37.007474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.359  qpair failed and we were unable to recover it.
00:32:21.359  [... the preceding posix_sock_create/nvme_tcp_qpair_connect_sock error pair (errno = 111, connection refused, tqpair=0xb491a0, addr=10.0.0.2, port=4420) repeats continuously from 00:13:37.007597 through 00:13:37.030 ...]
00:32:21.363  [2024-12-10 00:13:37.030397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.030430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.030610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.030643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.030826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.030858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.030963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.030996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.031183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.031217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.031333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.031366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.031522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.031555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.031798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.031831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.032004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.032038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.032213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.032248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.032423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.032457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.032700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.032733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.032915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.032949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.033118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.033153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.033453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.033486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.033695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.033728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.033830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.033864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.034049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.034083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.034348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.034382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.034493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.034537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.034731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.034765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.034898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.034932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.035118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.035151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.035266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.035299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.035474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.035507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.363  qpair failed and we were unable to recover it.
00:32:21.363  [2024-12-10 00:13:37.035624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.363  [2024-12-10 00:13:37.035658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.035772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.035806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.035927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.035960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.036143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.036188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.036291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.036324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.036514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.036548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.036740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.036773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.036946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.036980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.037118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.037152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.037376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.037410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.037651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.037684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.037809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.037842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.037968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.038002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.038279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.038313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.038488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.038521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.038684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.038866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.038899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.039033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.039066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.039268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.039302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.039431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.039464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.039657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.039690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.039897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.039936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.040229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.040264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.040377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.040410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.040602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.040636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.040819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.040851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.040957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.040990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.041184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.041219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.041350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.041384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.041575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.041608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.041786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.041819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.364  [2024-12-10 00:13:37.042059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.364  [2024-12-10 00:13:37.042092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.364  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.042271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.042305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.042549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.042582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.042823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.042856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.042991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.043156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.043337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.043614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.043784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.043941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.043974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.044144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.044186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.044371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.044404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.044596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.044629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.044837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.044870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.044995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.045027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.045285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.045319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.045522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.045555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.045742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.045775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.045977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.046010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.046140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.046192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.046303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.046336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.046620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.046653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.046829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.046862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.046977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.047009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.047150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.047193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.047461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.047494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.047612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.047645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [2024-12-10 00:13:37.047847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.365  [2024-12-10 00:13:37.047881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.365  qpair failed and we were unable to recover it.
00:32:21.365  [... the same three-line sequence ("connect() failed, errno = 111" / "sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it.") repeats ~103 more times between 00:13:37.048 and 00:13:37.070; only the timestamps differ ...]
00:32:21.369  [2024-12-10 00:13:37.070266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.070300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.070503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.070536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.070732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.070765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.070956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.070989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.071252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.071287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.071486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.071520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.071714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.071747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.071888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.071921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.072034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.072068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.072286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.072320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.072493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.072526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.072709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.072742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.072864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.072898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.073050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.073083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.073266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.073302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.073502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.073536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.073794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.073827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.074086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.074119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.074303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.074338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.074623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.074656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.074894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.074928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.075105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.075138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.075367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.075400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.075540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.075574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.075745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.075778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.075925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.369  [2024-12-10 00:13:37.075958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.369  qpair failed and we were unable to recover it.
00:32:21.369  [2024-12-10 00:13:37.076246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.076280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.076415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.076448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.076638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.076671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.076788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.076822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.076994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.077027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.077146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.077188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.077362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.077396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.077605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.077638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.077764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.077796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.077966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.078121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.078278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.078577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.078788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.078943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.078976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.079221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.079255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.079499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.079533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.079660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.079694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.079869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.079902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.080142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.080186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.080327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.080360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.080482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.080515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.080633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.080667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.080869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.080902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.081093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.081127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.081319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.081353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.081557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.081590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.081774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.081807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.081993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.082026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.082216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.082250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.082421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.082455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.082723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.082756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.370  [2024-12-10 00:13:37.082934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.370  [2024-12-10 00:13:37.082968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.370  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.083229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.083265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.083459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.083672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.083705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.083826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.083859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.084076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.084110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.084355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.084391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.084653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.084686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.084807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.084840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.084976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.085010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.085261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.085295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.085479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.085511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.085648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.085682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.085805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.085838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.086079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.086112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.086309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.086344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.086525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.086558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.086772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.086805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.086914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.086946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.087135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.087206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.087425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.087459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.087634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.087667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.087841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.087874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.088010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.088043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.088158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.088205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.088471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.088504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.088768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.088801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.088921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.088954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.089153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.089197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.089315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.089348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.089533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.089567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.089776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.089809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.090051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.371  [2024-12-10 00:13:37.090084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.371  qpair failed and we were unable to recover it.
00:32:21.371  [2024-12-10 00:13:37.090217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.090252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.090439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.090473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.090578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.090611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.090805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.090839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.091033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.091067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.091256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.091290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.091509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.091542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.091735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.091769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.091966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.091999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.092272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.092307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.092523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.092558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.092732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.092766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.092897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.092931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.093182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.093223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.093471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.093505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.093703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.093736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.094000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.094033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.094221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.094255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.094433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.094467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.094711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.094743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.095003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.095036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.095226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.095261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.095481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.095514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.095753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.095786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.095983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.096016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.096201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.096235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.096360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.096393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.096575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.096608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.096825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.096859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.097115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.097149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.097303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.097336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.097526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.372  [2024-12-10 00:13:37.097560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.372  qpair failed and we were unable to recover it.
00:32:21.372  [2024-12-10 00:13:37.097745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.097777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.097982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.098017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.098140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.098183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.098370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.098403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.098586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.098619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.098804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.098837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.099025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.099059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.099298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.099332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.099600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.099633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.099880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.099914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.100104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.100137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.100344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.100378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.100494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.100527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.100658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.100691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.100909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.100943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.101073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.101107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.101294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.101328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.101567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.101601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.101866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.101899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.102159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.102201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.102313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.102345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.102608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.102641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.102859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.102893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.103076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.103109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.103386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.103420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.103552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.103584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.103853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.103887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.104011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.104045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.104242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.104276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.104564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.104598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.104816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.104849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.105038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.105072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.373  [2024-12-10 00:13:37.105355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.373  [2024-12-10 00:13:37.105390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.373  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.105502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.105535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.105736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.105770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.105978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.106011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.106234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.106270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.106452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.106486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.106757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.106790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.106908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.106942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.107063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.107096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.107267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.107302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.107493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.107526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.107734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.107767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.107944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.107978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.108152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.108205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.108379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.108412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.108538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.108571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.108689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.108722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.108908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.108946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.109070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.109103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.109282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.109317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.109572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.109605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.109900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.109933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.374  [2024-12-10 00:13:37.110151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.374  [2024-12-10 00:13:37.110194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.374  qpair failed and we were unable to recover it.
00:32:21.375  [... the same connect() failed (errno = 111) / qpair recovery error group repeated for ~100 further attempts between 00:13:37.110 and 00:13:37.132, tqpair=0xb491a0, addr=10.0.0.2, port=4420 ...]
00:32:21.377  [2024-12-10 00:13:37.132844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.377  [2024-12-10 00:13:37.132877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.377  qpair failed and we were unable to recover it.
00:32:21.377  [2024-12-10 00:13:37.133087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.377  [2024-12-10 00:13:37.133127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.377  qpair failed and we were unable to recover it.
00:32:21.377  [2024-12-10 00:13:37.133282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.133317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.133563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.133597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.133769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.133802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.133906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.133939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.134214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.134249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.134447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.134480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.134650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.134683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.134878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.134911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.135090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.135123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.135269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.135303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.135408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.135441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.135631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.135664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.135905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.135938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.136194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.136229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.136402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.136435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.136693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.136727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.136840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.136874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.137061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.137095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.137283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.137318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.137443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.137476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.137594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.137629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.137884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.137917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.138954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.138987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.139092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.139125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.139321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.139355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.139595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.139629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.139877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.378  [2024-12-10 00:13:37.139909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.378  qpair failed and we were unable to recover it.
00:32:21.378  [2024-12-10 00:13:37.140178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.140212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.140351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.140384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.140490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.140522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.140751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.140784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.140967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.141001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.141180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.141215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.141457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.141490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.141618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.141651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.141832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.141866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.142062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.142096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.142233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.142268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.142397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.142431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.142708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.142741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.142923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.142956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.143131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.143165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.143349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.143382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.143558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.143592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.143716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.143750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.143972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.144006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.144248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.144282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.144524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.144557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.144678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.144711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.144979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.145012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.145185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.145219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.145337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.145370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.145491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.145524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.145727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.145760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.379  [2024-12-10 00:13:37.146000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.379  [2024-12-10 00:13:37.146033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.379  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.146217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.146252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.146450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.146483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.146768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.146802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.146988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.147020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.147285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.147320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.147585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.147618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.147797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.147830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.148045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.148084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.148280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.148315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.148504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.148537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.148678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.148711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.148902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.148936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.149109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.149141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.149327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.149360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.149466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.149499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.149684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.149718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.149900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.149933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.150225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.150260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.150437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.150470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.150648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.150681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.150805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.150838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380  [2024-12-10 00:13:37.151035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.380  [2024-12-10 00:13:37.151070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.380  qpair failed and we were unable to recover it.
00:32:21.380-00:32:21.665  [... the same three-line error sequence repeated ~100 more times: posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it"; event timestamps 00:13:37.151335 through 00:13:37.173059 elided ...]
00:32:21.665  [2024-12-10 00:13:37.173309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.173344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.173521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.173554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.173694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.173727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.173905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.173938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.174129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.174162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.174297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.174331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.174503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.174536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.174719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.174752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.174937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.174969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.175235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.175269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.175516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.175550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.175794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.175827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.176014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.176046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.176157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.176198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.176381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.176414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.176674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.176707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.176952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.176985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.177227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.177262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.177459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.177497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.665  [2024-12-10 00:13:37.177651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.665  [2024-12-10 00:13:37.177684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.665  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.177946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.177980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.178172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.178206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.178380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.178413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.178587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.178620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.178830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.178863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.178993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.179026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.179219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.179253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.179376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.179409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.179612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.179646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.179861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.179894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.180138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.180182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.180396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.180430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.180626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.180660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.180786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.180819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.180952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.180986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.181179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.181213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.181337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.181371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.181636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.181669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.181867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.181900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.182084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.182118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.182232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.182277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.182522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.182555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.182692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.182725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.182986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.183019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.183201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.183236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.183359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.183392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.183667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.183700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.183818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.183852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.184118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.184152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.184359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.184392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.184685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.184718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.666  [2024-12-10 00:13:37.184906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.666  [2024-12-10 00:13:37.184939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.666  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.185107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.185392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.185538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.185706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.185874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.185987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.186020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.186309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.186342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.186518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.186556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.186744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.186778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.186966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.187000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.187130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.187163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.187382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.187415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.187618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.187651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.187829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.187862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.188123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.188155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.188371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.188406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.188594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.188628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.188841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.188874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.189001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.189034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.189145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.189190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.189399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.189433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.189702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.189736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.189915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.189948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.190153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.190198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.190417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.190451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.190558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.190591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.190714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.190746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.190944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.190977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.667  [2024-12-10 00:13:37.191086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.667  [2024-12-10 00:13:37.191119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.667  qpair failed and we were unable to recover it.
00:32:21.668  [2024-12-10 00:13:37.191342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.668  [2024-12-10 00:13:37.191376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.668  qpair failed and we were unable to recover it.
00:32:21.668  [2024-12-10 00:13:37.191642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.668  [2024-12-10 00:13:37.191675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.668  qpair failed and we were unable to recover it.
00:32:21.668  [2024-12-10 00:13:37.191852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.668  [2024-12-10 00:13:37.191886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.668  qpair failed and we were unable to recover it.
[... output trimmed: the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0xb491a0 with addr=10.0.0.2, port=4420 repeats ~84 more times between 00:13:37.192 and 00:13:37.210 ...]
00:32:21.670  Read completed with error (sct=0, sc=8)
00:32:21.670  starting I/O failed
[... output trimmed: 31 more Read/Write completions failed with error (sct=0, sc=8), each followed by "starting I/O failed" ...]
00:32:21.671  [2024-12-10 00:13:37.211256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:21.671  [2024-12-10 00:13:37.211631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.211706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
[... output trimmed: the same connect() failed (errno = 111) / sock connection error / "qpair failed" sequence repeats for tqpair=0x7fcb74000b90 through 00:13:37.213, then resumes for tqpair=0xb491a0 through 00:13:37.214 ...]
00:32:21.671  [2024-12-10 00:13:37.214327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.214361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.214482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.214516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.214662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.214695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.214977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.215126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.215291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.215466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.215742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.215945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.215978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.216197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.216233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.216474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.216507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.216683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.216715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.217005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.217038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.217230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.217263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.217528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.671  [2024-12-10 00:13:37.217562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.671  qpair failed and we were unable to recover it.
00:32:21.671  [2024-12-10 00:13:37.217801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.217834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.218047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.218080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.218203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.218238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.218484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.218516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.218624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.218657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.218908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.218941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.219147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.219191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.219323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.219357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.219495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.219529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.219788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.219821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.220085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.220228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.220464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.220626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.220790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.220996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.221029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.221222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.221257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.221395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.221428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.221624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.221657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.221850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.221885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.222070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.222103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.222296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.222336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.222472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.222506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.222691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.222724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.222936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.222969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.223089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.223124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.223275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.223311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.223554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.223588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.223768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.223801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.223988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.224022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.224207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.224242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.224370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.224404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.672  qpair failed and we were unable to recover it.
00:32:21.672  [2024-12-10 00:13:37.224606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.672  [2024-12-10 00:13:37.224640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.224818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.224851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.225022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.225056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.225279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.225313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.225445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.225479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.225656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.225689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.225878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.225911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.226035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.226068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.226258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.226292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.226548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.226583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.226789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.226822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.226946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.226979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.227222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.227257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.227386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.227419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.227534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.227568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.227823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.227855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.228047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.228080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.228203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.228238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.228431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.228465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.228705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.228738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.228928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.228962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.229205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.229240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.229431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.229465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.229656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.229690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.229811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.229845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.229972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.230006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.230143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.230184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.230307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.230341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.230608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.230642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.230769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.230804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.231074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.673  [2024-12-10 00:13:37.231108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.673  qpair failed and we were unable to recover it.
00:32:21.673  [2024-12-10 00:13:37.231269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.674  [2024-12-10 00:13:37.231302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.674  qpair failed and we were unable to recover it.
00:32:21.674  [2024-12-10 00:13:37.231415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.674  [2024-12-10 00:13:37.231449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.674  qpair failed and we were unable to recover it.
00:32:21.674  [2024-12-10 00:13:37.231614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.674  [2024-12-10 00:13:37.231647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.674  qpair failed and we were unable to recover it.
00:32:21.674  [2024-12-10 00:13:37.231768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.674  [2024-12-10 00:13:37.231801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.674  qpair failed and we were unable to recover it.
00:32:21.674  [2024-12-10 00:13:37.232008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.674  [2024-12-10 00:13:37.232042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:21.674  qpair failed and we were unable to recover it.
00:32:21.674  -- last 3 messages repeated 62 more times (tqpair=0xb491a0, addr=10.0.0.2, port=4420, timestamps 00:13:37.232245 through 00:13:37.245438) --
00:32:21.676  [2024-12-10 00:13:37.245438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.676  [2024-12-10 00:13:37.245520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.676  qpair failed and we were unable to recover it.
00:32:21.676  -- last 3 messages repeated 40 more times (tqpair=0x7fcb68000b90, addr=10.0.0.2, port=4420, timestamps 00:13:37.245673 through 00:13:37.254407) --
00:32:21.677  [2024-12-10 00:13:37.254663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.254697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.254873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.254905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.255193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.255227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.255414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.255449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.255562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.255596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.255714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.255747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.255925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.255958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.256135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.256176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.256289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.256324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.256584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.256618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.256722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.256755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.257009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.257043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.257180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.257214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.257342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.677  [2024-12-10 00:13:37.257376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.677  qpair failed and we were unable to recover it.
00:32:21.677  [2024-12-10 00:13:37.257569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.257603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.257791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.257823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.257994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.258153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.258374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.258516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.258741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.258946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.258980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.259251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.259285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.259456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.259489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.259615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.259649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.259819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.259854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.260102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.260137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.260284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.260319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.260426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.260460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.260700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.260733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.260885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.260918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.261155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.261198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.261438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.261472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.261592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.261625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.261863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.261896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.262067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.262100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.262338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.262371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.262547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.262579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.262695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.262728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.262903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.262941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.263242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.263276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.263564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.263598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.263802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.263835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.264013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.264046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.264162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.264204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.264393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.678  [2024-12-10 00:13:37.264427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.678  qpair failed and we were unable to recover it.
00:32:21.678  [2024-12-10 00:13:37.264609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.264644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.264758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.264791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.265054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.265087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.265298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.265332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.265510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.265550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.265744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.265778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.265903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.265938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.266128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.266161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.266431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.266465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.266572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.266604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.266715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.266748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.266864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.266898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.267108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.267348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.267491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.267697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.267862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.267967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.268001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.268186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.268220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.268427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.268462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.268650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.268685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.268861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.268894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.269089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.269122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.269282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.269317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.269492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.269526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.269699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.269732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.269908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.269943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.270101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.270253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.270487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.270638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.679  [2024-12-10 00:13:37.270795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.679  qpair failed and we were unable to recover it.
00:32:21.679  [2024-12-10 00:13:37.270993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.271027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.271143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.271193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.271373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.271407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.271607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.271641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.271828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.271863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.272056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.272090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.272355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.272390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.272524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.272557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.272673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.272708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.272889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.272923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.273933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.273969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.274142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.274184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.274360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.274395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.274508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.274542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.274736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.274769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.274895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.274929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.275108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.275142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.275284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.275318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.275531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.275564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.275777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.275810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.275943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.275979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.276195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.276231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.276505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.276539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.276749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.276784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.276921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.276955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.277089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.277123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.680  [2024-12-10 00:13:37.277397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.680  [2024-12-10 00:13:37.277432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.680  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.277567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.277600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.277711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.277744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.277872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.277917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.278096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.278131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.278426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.278461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.278602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.278635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.278874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.278909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.279017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.279051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.279222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.279257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.279465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.279506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.279637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.279672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.279862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.279896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.280074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.280111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.280315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.280350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.280478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.280511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.280685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.280719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.280898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.280932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.281201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.281235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.281451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.281487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.281683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.281715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.281919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.281953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.282144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.282186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.282438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.282472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.282660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.282693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.282937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.282970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.283164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.283210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.283387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.283421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.681  [2024-12-10 00:13:37.283537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.681  [2024-12-10 00:13:37.283572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.681  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.283777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.283810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.283928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.283961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.284148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.284194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.284371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.284404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.284579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.284613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.284785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.284818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.284951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.284984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.285183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.285217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.285466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.285500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.285611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.285644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.285884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.285919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.286099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.286133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.286350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.286385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.286503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.286535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.286750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.286784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.286901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.286934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.287056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.287089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.287280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.287314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.287500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.287535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.287723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.287757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.287960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.287993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.288112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.288151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.288418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.288452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.288630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.288663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.288903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.288939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.289056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.289089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.289285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.289320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.289505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.289545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.289721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.289758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.289932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.289965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.290088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.290122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.682  [2024-12-10 00:13:37.290368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.682  [2024-12-10 00:13:37.290403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.682  qpair failed and we were unable to recover it.
00:32:21.683  [2024-12-10 00:13:37.290529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.683  [2024-12-10 00:13:37.290563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.683  qpair failed and we were unable to recover it.
00:32:21.683  [2024-12-10 00:13:37.290693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.683  [2024-12-10 00:13:37.290726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.683  qpair failed and we were unable to recover it.
00:32:21.683  [2024-12-10 00:13:37.290911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.683  [2024-12-10 00:13:37.290945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.683  qpair failed and we were unable to recover it.
00:32:21.683  [2024-12-10 00:13:37.291129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.683  [2024-12-10 00:13:37.291164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.683  qpair failed and we were unable to recover it.
00:32:21.683  [log condensed: the preceding connect()/qpair error pair repeats 92 more times between 00:13:37.291 and 00:13:37.310, all for tqpair=0x7fcb68000b90 against addr=10.0.0.2, port=4420, with errno = 111]
00:32:21.686  [2024-12-10 00:13:37.310486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.310571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [log condensed: the same connect()/qpair error pair repeats 9 more times between 00:13:37.310 and 00:13:37.313 for tqpair=0x7fcb74000b90 against addr=10.0.0.2, port=4420, with errno = 111]
00:32:21.686  [2024-12-10 00:13:37.313259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.313310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.313528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.313562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.313805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.313838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.314022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.314066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.314245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.314280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.314549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.314601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.314813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.314852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.314975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.315008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.315261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.315295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.315499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.315532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.315664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.315697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.315828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.315861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.316051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.316083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.316266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.316300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.316689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.686  [2024-12-10 00:13:37.316722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.686  qpair failed and we were unable to recover it.
00:32:21.686  [2024-12-10 00:13:37.316847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.316880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.317049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.317273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.317430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.317583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.317796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.317977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.318190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.318332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.318627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.318777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.318925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.318958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.319210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.319244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.319371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.319404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.319560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.319617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.319855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.320037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.320085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.320351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.320395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.320555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.320601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.320802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.320837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.321016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.321049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.321209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.321247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.321482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.321517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.321657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.321690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.321840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.321888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.322185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.322239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.322438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.322474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.322693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.322736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.322909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.322942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.323064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.323098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.323305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.323339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.323526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.687  [2024-12-10 00:13:37.323559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.687  qpair failed and we were unable to recover it.
00:32:21.687  [2024-12-10 00:13:37.323825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.323859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.323979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.324020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.324207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.324242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.324413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.324446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.324564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.324597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.324845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.324878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.325116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.325149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.325423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.325460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.325672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.325705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.325985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.326150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.326193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.326436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.326469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.326655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.326688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.326919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.326953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.327233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.327268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.327463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.327496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.327698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.327731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.327866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.327900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.328112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.328145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.328276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.328310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.328479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.328512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.328720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.328754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.329085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.329196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.329454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.329502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.329748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.329790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.329903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.329938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.330135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.330187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.330380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.330417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.330655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.330689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.330878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.330928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.331076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.331112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.331372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.688  [2024-12-10 00:13:37.331405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.688  qpair failed and we were unable to recover it.
00:32:21.688  [2024-12-10 00:13:37.331520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.331554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.331765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.331798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.331926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.331959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.332154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.332203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.332389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.332422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.332552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.332585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.332772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.332809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.332983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.333015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.333194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.333228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.333399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.333432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.333635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.333682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.333889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.333935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.334226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.334275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.334575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.334625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.334913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.334970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.335164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.335220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.335408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.335441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.335636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.335670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.335776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.335808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.335907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.335939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.336075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.336109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.336381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.336417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.336521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.336554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.336729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.336761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.336949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.336982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.337101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.337134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.337293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.337342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.337537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.337570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.337773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.337807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.338053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.689  [2024-12-10 00:13:37.338086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.689  qpair failed and we were unable to recover it.
00:32:21.689  [2024-12-10 00:13:37.338277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.338313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.338577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.338610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.338783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.338817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.339071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.339104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.339234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.339269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.339515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.339548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.339719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.339753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.339957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.339989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.340113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.340147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.340273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.340308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.340441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.340474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.340742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.340775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.340899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.340931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.341183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.341224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.341352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.341386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.341513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.341545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.341665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.341697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.341802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.341835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.342041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.342074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.342203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.342240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.342486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.342519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.342783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.342816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.343008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.343042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.343281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.343317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.343503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.343536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.343727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.343760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.343882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.343915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.344105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.344138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.344292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.344331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.344472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.344505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.344684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.344718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.344912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.690  [2024-12-10 00:13:37.344945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.690  qpair failed and we were unable to recover it.
00:32:21.690  [2024-12-10 00:13:37.345136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.345179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.345381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.345414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.345591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.345624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.345753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.345787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.346003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.346036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.346277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.346314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.346489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.346523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.346706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.346738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.347030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.347065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.347254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.347289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.347506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.347539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.347751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.347784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.347999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.348157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.348389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.348551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.348723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.348943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.348976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.349093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.349125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.349262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.349296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.349424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.349456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.349586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.349624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.349868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.349902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.350089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.350121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.350278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.350314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.350496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.350529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.350724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.350758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.350883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.350915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.351114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.351146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.351345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.351381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.351499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.351532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.691  [2024-12-10 00:13:37.351715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.691  [2024-12-10 00:13:37.351748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.691  qpair failed and we were unable to recover it.
00:32:21.693  [... preceding two error lines (posix.c:1054:posix_sock_create connect() errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock for tqpair=0x7fcb74000b90, addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." repeated 102 more times between 00:13:37.351915 and 00:13:37.373546 ...]
00:32:21.695  [2024-12-10 00:13:37.373971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.374005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.374135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.374193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.374371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.374404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.374588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.374620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.374743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.374775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.375082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.375299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.375516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.375732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.375869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.375985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.376017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.376253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.376287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.376534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.376566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.376758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.376790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.376959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.377212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.377247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.377434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.377466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.377584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.377616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.377802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.377834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.377957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.377990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.378209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.695  [2024-12-10 00:13:37.378242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.695  qpair failed and we were unable to recover it.
00:32:21.695  [2024-12-10 00:13:37.378441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.378474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.378601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.378634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.378899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.378931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.379055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.379088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.379265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.379299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.379424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.379457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.379566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.379599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.379850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.379882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.380075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.380108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.380249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.380282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.380547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.380580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.380752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.380784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.380897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.380930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.381187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.381227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.381367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.381400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.381644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.381676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.381865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.381897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.382037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.382070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.382269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.382304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.382525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.382766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.382799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.382993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.383148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.383315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.383454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.383687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.383842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.383875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.384148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.384191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.384476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.384509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.384682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.384714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.384952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.384985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.696  [2024-12-10 00:13:37.385193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.696  [2024-12-10 00:13:37.385227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.696  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.385399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.385432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.385643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.385677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.385808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.385840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.386026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.386059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.386164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.386217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.386457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.386489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.386679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.386711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.386984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.387200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.387425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.387574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.387739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.387946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.387979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.388183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.388218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.388478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.388511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.388649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.388681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.388853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.388885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.389064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.389096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.389220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.389254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.389511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.389544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.389674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.389707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.389893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.389931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.390120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.390153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.697  qpair failed and we were unable to recover it.
00:32:21.697  [2024-12-10 00:13:37.390366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.697  [2024-12-10 00:13:37.390398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.698  [2024-12-10 00:13:37.390591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.698  [2024-12-10 00:13:37.390623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.698  [2024-12-10 00:13:37.390885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.698  [2024-12-10 00:13:37.390918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.698  [2024-12-10 00:13:37.391053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.698  [2024-12-10 00:13:37.391087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.698  [2024-12-10 00:13:37.391260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.698  [2024-12-10 00:13:37.391295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.698  [2024-12-10 00:13:37.391481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.698  [2024-12-10 00:13:37.391514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.698  qpair failed and we were unable to recover it.
00:32:21.701  [... the preceding three-line error sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420; qpair failed and unrecoverable) repeats ~100 more times between 00:13:37.391696 and 00:13:37.413627 ...]
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.413799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.413831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.413940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.413973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.414188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.414222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.414495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.414527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.414702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.414734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.414926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.414958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.415089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.415122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.415340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.415374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.415562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.415596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.415772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.415810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.415993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.416147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.416314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.416475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.416619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.416848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.416881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.417089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.417122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.417262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.701  [2024-12-10 00:13:37.417296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.701  qpair failed and we were unable to recover it.
00:32:21.701  [2024-12-10 00:13:37.417422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.417454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.417641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.417673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.417929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.417962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.418156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.418220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.418325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.418357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.418537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.418570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.418759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.418791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.418965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.418998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.419114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.419146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.419350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.419384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.419648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.419680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.419815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.419847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.419966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.419999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.420188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.420221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.420482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.420515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.420687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.420719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.420953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.420986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.421196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.421231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.421367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.421400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.421580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.421613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.421811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.421843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.422108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.422140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.422394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.422429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.422636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.422669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.422870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.422902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.423180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.423214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.423450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.423483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.423674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.423705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.423883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.423916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.424129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.424162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.424364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.424396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.424633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.424670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.702  qpair failed and we were unable to recover it.
00:32:21.702  [2024-12-10 00:13:37.424857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.702  [2024-12-10 00:13:37.424890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.425016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.425050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.425326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.425360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.425552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.425583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.425825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.425857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.426045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.426078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.426266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.426301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.426423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.426455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.426658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.426691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.426952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.426986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.427101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.427133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.427278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.427312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.427519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.427552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.427807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.427840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.427946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.427978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.428162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.428205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.428445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.428477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.428753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.428785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.429068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.429100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.429223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.429257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.429463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.429496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.429708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.429741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.429924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.429957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.430149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.430190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.430384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.430418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.430602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.430633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.430817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.430851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.431035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.431068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.431243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.431276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.431408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.431441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.703  [2024-12-10 00:13:37.431623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.703  [2024-12-10 00:13:37.431656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.703  qpair failed and we were unable to recover it.
00:32:21.704  [... the preceding three error lines repeated 35 more times (00:13:37.431924 – 00:13:37.439316), all for tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420 ...]
00:32:21.705  [2024-12-10 00:13:37.439553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.705  [2024-12-10 00:13:37.439640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.705  qpair failed and we were unable to recover it.
00:32:21.705  [... the preceding three error lines repeated 39 more times (00:13:37.439930 – 00:13:37.448377), all for tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420 ...]
00:32:21.706  [2024-12-10 00:13:37.448553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.706  [2024-12-10 00:13:37.448626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.706  qpair failed and we were unable to recover it.
00:32:21.706  [... the preceding three error lines repeated 26 more times (00:13:37.448765 – 00:13:37.454299), all for tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420 ...]
00:32:21.706  [2024-12-10 00:13:37.454507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.706  [2024-12-10 00:13:37.454541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.706  qpair failed and we were unable to recover it.
00:32:21.706  [2024-12-10 00:13:37.455040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.706  [2024-12-10 00:13:37.455073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.706  qpair failed and we were unable to recover it.
00:32:21.706  [2024-12-10 00:13:37.455287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.706  [2024-12-10 00:13:37.455322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.706  qpair failed and we were unable to recover it.
00:32:21.706  [2024-12-10 00:13:37.455521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.455554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.455759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.455792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.456059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.456093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.456234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.456269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.456402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.456436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.456613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.456647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.456862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.456894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.457027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.457060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.457326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.457360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.457484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.457516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.457744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.457818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.458013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.458050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.458236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.458272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.458515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.458549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.458722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.458756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.459018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.459052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.459318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.459352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.459484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.459517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.459691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.459724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.459918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.459950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.460056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.460089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.460389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.460424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.460606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.460638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.460838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.460881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.461079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.461112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.461251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.461286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.461466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.461499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.461745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.461780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.461981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.462015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.462261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.462296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.462537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.462570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.462697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.462731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.462985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.463020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.463194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.463228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.463352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.463386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.463569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.463602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.463720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.707  [2024-12-10 00:13:37.463753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.707  qpair failed and we were unable to recover it.
00:32:21.707  [2024-12-10 00:13:37.463965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.463998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.464213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.464245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.464525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.464559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.464744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.464777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.464896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.464929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.465085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.465243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.465391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.465560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.465864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.465978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.466130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.466428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.466575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.466735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.466956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.466988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.467218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.467377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.467410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.467534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.467567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.467702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.467735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.467835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.467868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.468055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.468092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.468230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.468266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.468444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.468479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.468668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.468702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.468902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.468935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.469050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.469084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.469269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.469304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.469422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.469457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.469642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.469675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.469874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.469908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.470101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.470134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.470317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.708  [2024-12-10 00:13:37.470352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.708  qpair failed and we were unable to recover it.
00:32:21.708  [2024-12-10 00:13:37.470459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.470493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.470670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.470706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.470836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.470870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.470988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.471127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.471306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.471467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.471625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
00:32:21.709  [2024-12-10 00:13:37.471952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.709  [2024-12-10 00:13:37.471986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:21.709  qpair failed and we were unable to recover it.
[... the preceding connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" messages for tqpair=0x7fcb6c000b90 repeat verbatim 99 more times; only the timestamps differ ...]
00:32:21.711  [2024-12-10 00:13:37.492309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.492384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.492543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.492579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.492843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.492877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.493066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.493101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.493238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.493272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.493395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.493429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.493610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.711  [2024-12-10 00:13:37.493643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.711  qpair failed and we were unable to recover it.
00:32:21.711  [2024-12-10 00:13:37.493880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.493913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.494086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.494118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.494305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.494354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.494475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.494509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.494633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.494666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.494842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.494875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.495050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.495084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.495254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.495290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.495545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.495578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.495701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.495734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.495852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.495885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.496016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.496050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.496240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.496274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.496465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.496499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.496694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.496727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.496900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.496931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.497052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.497085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.497328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.497363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.497493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.497526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.497743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.497776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.497969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.498002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.498183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.498218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.498396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.498430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.498589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.498622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.498741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.498774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.499012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.499050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.499226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.499262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.712  [2024-12-10 00:13:37.499460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.712  [2024-12-10 00:13:37.499492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.712  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.499672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.499706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.499953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.499985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.500274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.500307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.500497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.500530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.500776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.500810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.500930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.500964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.501209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.501243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.501442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.501475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.501614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.501647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.501759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.501791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.501963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.501996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.502264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.502299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.502541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.502574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.502688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.502721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.502984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.503018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.993  [2024-12-10 00:13:37.503187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.993  [2024-12-10 00:13:37.503220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.993  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.503423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.503456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.503573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.503606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.503780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.503814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.504010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.504042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.504217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.504263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.504443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.504477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.504667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.504701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.504898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.504931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.505072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.505105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.505232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.505267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.505532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.505565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.505702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.505736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.505849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.505882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.506003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.506036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.506233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.506267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.506512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.506545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.506660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.506693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.506823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.506856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.507033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.507066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.507239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.507273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.507382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.507424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.507661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.507700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.507814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.507847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.508050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.508083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.508274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.508309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.508426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.508458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.508633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.508668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.508845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.508878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.509052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.509087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.509197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.509232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.509502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.509536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.994  qpair failed and we were unable to recover it.
00:32:21.994  [2024-12-10 00:13:37.509730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.994  [2024-12-10 00:13:37.509763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.509947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.509981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.510179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.510214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.510341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.510374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.510524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.510558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.510743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.510776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.510898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.510931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.511108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.511141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.511336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.511370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.511500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.511534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.511662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.511695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.511815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.511849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.512091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.512124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.512243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.512277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.512388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.512421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.512613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.512646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.512819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.512850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.513960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.513993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.514276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.514311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.514499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.514532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.514725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.514758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.995  qpair failed and we were unable to recover it.
00:32:21.995  [2024-12-10 00:13:37.515837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.995  [2024-12-10 00:13:37.515870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.516041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.516074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.516209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.516244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.516354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.516387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.516650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.516683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.516858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.516890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.517025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.517058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.517176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.517210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.517461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.517495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.517674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.517707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.517881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.517915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.518885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.518919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.519162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.519206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.519320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.519352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.519556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.519589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.519788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.519823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.520936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.520969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.521079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.521112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.521236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.521269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.521446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.521479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.521718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.521750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.521937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.996  [2024-12-10 00:13:37.521970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.996  qpair failed and we were unable to recover it.
00:32:21.996  [2024-12-10 00:13:37.522231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.522267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.522466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.522500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.522688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.522722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.522911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.522944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.523953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.523986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.524173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.524209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.524426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.524460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.524580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.524613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.524827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.524862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.525041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.525075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.525265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.525300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.525475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.525508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.525696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.525729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.525915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.525948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.526127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.526160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.526385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.526418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.526614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.526647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.526833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.526866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.527131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.527164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.527348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.527381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.527553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.527586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.527753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.527798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.527974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.528007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.528181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.528214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.528398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.528432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.528619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.528652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.997  [2024-12-10 00:13:37.528760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:21.997  [2024-12-10 00:13:37.528794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:21.997  qpair failed and we were unable to recover it.
00:32:21.998  [... the preceding connect()/qpair error triplet (errno = 111, tqpair=0x7fcb74000b90, addr=10.0.0.2, port=4420) repeats 102 more times between 2024-12-10 00:13:37.528 and 00:13:37.550 ...]
00:32:22.001  [2024-12-10 00:13:37.550207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.550241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.550451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.550483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.550670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.550828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.550860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.550976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.551010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.551140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.551200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.551377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.551410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.551539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.551571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.551747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.551779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.551983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.552016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.552234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.552268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.552507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.552539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.552721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.552754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.552879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.552911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.553029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.553062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.553243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.553277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.553455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.553489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.553600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.553633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.553873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.001  [2024-12-10 00:13:37.553905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.001  qpair failed and we were unable to recover it.
00:32:22.001  [2024-12-10 00:13:37.554032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.554065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.554250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.554284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.554394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.554427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.554531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.554564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.554757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.554789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.554967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.555000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.555200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.555234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.555410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.555443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.555629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.555662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.555836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.555870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.556046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.556078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.556271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.556304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.556408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.556441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.556614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.556646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.556822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.556860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.557127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.557161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.557343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.557376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.557507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.557540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.557659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.557692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.557866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.557899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.558876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.558910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.559107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.559141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.559369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.559403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.559512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.559545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.559658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.559691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.559864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.559897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.002  qpair failed and we were unable to recover it.
00:32:22.002  [2024-12-10 00:13:37.560071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.002  [2024-12-10 00:13:37.560104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.560240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.560275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.560537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.560569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.560676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.560708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.560896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.560929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.561072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.561299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.561511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.561659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.561817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.561996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.562228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.562384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.562532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.562677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.562930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.562964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.563165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.563229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.563336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.563370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.563476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.563509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.563770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.563803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.563980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.564139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.564364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.564509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.564672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.564879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.564912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.003  [2024-12-10 00:13:37.565920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.003  [2024-12-10 00:13:37.565953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.003  qpair failed and we were unable to recover it.
00:32:22.007  [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair and "qpair failed and we were unable to recover it." line repeat roughly 100 more times for tqpair=0x7fcb74000b90 (addr=10.0.0.2, port=4420), timestamps 00:13:37.566 through 00:13:37.588; the final repetition is truncated by the log capture. Repeats elided. ...]
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.588543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.588576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.588748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.588780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.588979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.589012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.589200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.589234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.589447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.589481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.589747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.589780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.589978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.590010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.590125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.590158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.590358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.590391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.590636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.590669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.590909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.590941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.591056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.591090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.591345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.591379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.591596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.591629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.591755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.591788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.591904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.591937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.592128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.592161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.592374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.592406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.592578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.592612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.592825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.007  [2024-12-10 00:13:37.592858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.007  qpair failed and we were unable to recover it.
00:32:22.007  [2024-12-10 00:13:37.592987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.593019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.593267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.593301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.593502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.593535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.593748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.593780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.594022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.594055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.594176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.594210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.594451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.594483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.594795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.594827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.595089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.595122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.595320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.595353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.595473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.595505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.595637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.595669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.595860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.595893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.596151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.596211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.596425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.596458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.596579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.596617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.596735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.596773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.596953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.596986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.597101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.597133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.597260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.597295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.597406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.597438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.597704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.597737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.597875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.598059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.598092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.598217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.598251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.598537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.598571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.598782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.598815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.599018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.599049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.599247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.599280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.599476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.599509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.599726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.599758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.008  [2024-12-10 00:13:37.599932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.008  [2024-12-10 00:13:37.599964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.008  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.600102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.600134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.600314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.600347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.600518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.600550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.600664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.600697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.600868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.600901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.601929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.601961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.602185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.602219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.602503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.602535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.602712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.602745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.602874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.602907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.603162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.603215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.603333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.603366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.603544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.603576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.603843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.603875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.603997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.604029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.604219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.604253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.604456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.604488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.604672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.604705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.604839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.604877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.605070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.605104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.605307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.605340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.605584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.009  [2024-12-10 00:13:37.605617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.009  qpair failed and we were unable to recover it.
00:32:22.009  [2024-12-10 00:13:37.605835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.010  [2024-12-10 00:13:37.605867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.010  qpair failed and we were unable to recover it.
00:32:22.010  [2024-12-10 00:13:37.605977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.010  [2024-12-10 00:13:37.606010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.010  qpair failed and we were unable to recover it.
00:32:22.013  [... previous three lines repeated ~102 more times between 00:13:37.606 and 00:13:37.629 as the initiator retried connect() to 10.0.0.2, port 4420; every attempt failed with errno 111 (ECONNREFUSED) and the qpair could not be recovered ...]
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.629336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.629369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.629471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.629504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.629698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.629731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.629922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.629954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.630215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.630249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.630426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.630458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.630552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.630583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.630791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.630823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.631059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.631092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.631280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.631314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.631499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.631531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.631728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.631767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.631952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.631985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.632194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.632228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.632413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.632446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.632567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.632599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.632849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.632882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.633054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.633087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.633345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.633380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.633638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.633672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.633890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.633922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.634124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.013  [2024-12-10 00:13:37.634157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.013  qpair failed and we were unable to recover it.
00:32:22.013  [2024-12-10 00:13:37.634371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.634405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.634608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.634641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.634835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.634867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.635056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.635089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.635275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.635309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.635507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.635539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.635778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.635811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.635981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.636014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.636211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.636245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.636486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.636518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.636730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.636763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.637031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.637065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.637359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.637394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.637522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.637554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.637816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.637849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.638036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.638068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.638336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.638371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.638559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.638591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.638704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.638738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.638921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.638955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.639124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.639156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.639404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.639438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.639673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.639706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.639889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.639922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.640126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.640159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.640425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.640458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.640704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.640737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.640842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.640875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.641086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.641120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.641388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.641428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.014  [2024-12-10 00:13:37.641707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.014  [2024-12-10 00:13:37.641740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.014  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.641981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.642014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.642303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.642338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.642605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.642638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.642919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.642952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.643179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.643214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.643413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.643447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.643653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.643686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.643854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.643888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.644125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.644158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.644387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.644574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.644606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.644778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.644810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.645006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.645040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.645235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.645269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.645507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.645540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.645777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.645811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.646072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.646105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.646286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.646321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.646588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.646620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.646857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.646890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.647070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.647103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.647340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.647374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.647547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.647579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.647709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.647743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.648005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.648037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.648279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.648313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.648449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.648481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.015  qpair failed and we were unable to recover it.
00:32:22.015  [2024-12-10 00:13:37.648739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.015  [2024-12-10 00:13:37.648772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.016  qpair failed and we were unable to recover it.
00:32:22.019  [... the preceding three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats ~102 more times, timestamps 2024-12-10 00:13:37.648963 through 00:13:37.674510 ...]
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.674748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.674782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.675053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.675085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.675288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.675323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.675585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.675618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.675755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.675788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.675962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.675995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.676255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.676289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.676578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.676611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.676899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.676931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.677130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.677163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.677414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.677447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.677745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.677777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.677958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.677991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.678189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.678224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.678420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.678695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.678727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.678921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.678955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.679223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.679256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.679546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.679578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.679817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.679849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.680036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.680070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.680337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.680371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.019  [2024-12-10 00:13:37.680577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.019  [2024-12-10 00:13:37.680610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.019  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.680784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.680817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.681005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.681039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.681303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.681337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.681637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.681670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.681926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.681964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.682242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.682278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.682556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.682589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.682826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.682859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.683099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.683133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.683401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.683436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.683557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.683591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.683839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.683872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.684110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.684143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.684446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.684480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.684736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.684769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.685015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.685049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.685234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.685534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.685567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.685856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.685888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.686160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.686205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.686396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.686621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.686654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.686828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.686861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.687100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.687133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.687385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.687419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.687594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.687627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.687865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.687899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.688137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.020  [2024-12-10 00:13:37.688192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.020  qpair failed and we were unable to recover it.
00:32:22.020  [2024-12-10 00:13:37.688404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.688438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.688685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.688718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.688987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.689020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.689220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.689255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.689433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.689467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.689674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.689707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.689973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.690006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.690201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.690235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.690507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.690540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.690812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.690846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.691127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.691159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.691437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.691471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.691645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.691678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.691816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.691849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.692114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.692147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.692401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.692435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.692612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.692650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.692843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.692877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.693127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.693161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.693352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.693385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.693659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.693692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.693890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.693924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.694034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.694067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.694273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.694308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.694576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.694609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.694785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.694819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.694955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.694988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.695232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.695265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.695559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.695591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.021  [2024-12-10 00:13:37.695879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.021  [2024-12-10 00:13:37.695912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.021  qpair failed and we were unable to recover it.
00:32:22.025  [... the preceding error pair (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock error for tqpair=0x7fcb74000b90, addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." repeat continuously from 00:13:37.696 through 00:13:37.722 ...]
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.722703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.722737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.722916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.722949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.723206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.723240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.723435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.723469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.723669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.723703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.723814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.723847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.724040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.724073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.724339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.724374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.724609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.724834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.724868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.725045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.725079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.725301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.725336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.725533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.725566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.725744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.725778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.725954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.725989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.726242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.726277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.726451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.726484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.726610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.726644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.726855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.726889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.727066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.727100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.727234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.727269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.727539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.727574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.727714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.727750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.727930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.727966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.728154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.025  [2024-12-10 00:13:37.728210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.025  qpair failed and we were unable to recover it.
00:32:22.025  [2024-12-10 00:13:37.728484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.728518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.728707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.728741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.728994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.729027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.729287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.729323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.729443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.729478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.729687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.729726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.730030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.730064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.730262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.730298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.730498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.730533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.730802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.730835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.730957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.730991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.731255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.731291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.731482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.731516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.731770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.731806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.732086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.732120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.732311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.732347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.732539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.732572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.732818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.732850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.733097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.733126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.733363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.733395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.733644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.733674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.733937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.733967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.734261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.734294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.734568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.734598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.734870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.734900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.735045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.735076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.735274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.735308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.735553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.735582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.026  qpair failed and we were unable to recover it.
00:32:22.026  [2024-12-10 00:13:37.735828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.026  [2024-12-10 00:13:37.735860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.735998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.736032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.736151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.736193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.736337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.736369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.736564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.736597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.736730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.736761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.736977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.737010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.737306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.737340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.737608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.737640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.737906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.737937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.738132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.738176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.738391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.738424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.738622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.738658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.738926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.738960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.739160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.739210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.739403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.739438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.739656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.739691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.739816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.739856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.740063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.740097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.740401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.740436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.740640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.740676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.740976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.741011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.741254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.741290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.741503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.741536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.741677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.741713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.741989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.742023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.742159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.742204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [2024-12-10 00:13:37.742496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.027  [2024-12-10 00:13:37.742532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.027  qpair failed and we were unable to recover it.
00:32:22.027  [... identical error sequence repeated: posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock failure for tqpair=0x7fcb74000b90 at addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — roughly 100 further duplicate occurrences spanning 00:13:37.742–00:13:37.768 omitted ...]
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.769049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.769083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.769294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.769329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.769469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.769503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.769785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.769819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.770073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.770106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.770299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.770335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.770597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.770640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.770894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.770929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.771201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.771239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.771546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.771580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.771860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.771894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.772179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.772214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.772470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.772504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.772790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.772824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.773030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.773064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.773249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.773284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.773493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.773527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.773728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.773761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.773938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.773973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.774157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.774204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.774396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.774430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.774628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.774662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.774881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.774915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.775034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.775068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.775264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.775299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.775485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.775518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.775782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.775817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.031  [2024-12-10 00:13:37.776003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.031  [2024-12-10 00:13:37.776039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.031  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.776182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.776218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.776402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.776436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.776621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.776655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.776854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.776889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.777038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.777071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.777403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.777440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.777579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.777612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.777829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.777864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.778053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.778087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.778210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.778243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.778442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.778475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.778762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.778796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.778923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.778956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.779156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.779203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.779455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.779488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.779683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.779718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.779997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.780032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.780163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.780211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.780354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.780396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.780582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.780616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.780818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.780853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.781038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.781071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.781260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.781296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.781570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.781604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.781806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.781840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.782152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.782199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.782440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.782476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.782672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.782706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.782934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.782968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.783191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.783227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.783434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.783468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.783749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.783783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.784049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.784084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.784212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.784249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.032  qpair failed and we were unable to recover it.
00:32:22.032  [2024-12-10 00:13:37.784446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.032  [2024-12-10 00:13:37.784479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.784810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.784845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.785033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.785068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.785296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.785333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.785640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.785674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.785858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.785893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.786182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.786219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.786487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.786520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.786773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.786807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.787016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.787050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.787232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.787267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.787504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.787539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.787841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.787874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.788135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.788194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.788403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.788437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.788563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.788597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.788871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.788907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.789094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.789130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [2024-12-10 00:13:37.789439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.033  [2024-12-10 00:13:37.789474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.033  qpair failed and we were unable to recover it.
00:32:22.033  [... identical connect() failure (errno = 111, ECONNREFUSED) repeated for tqpair=0x7fcb74000b90, addr=10.0.0.2, port=4420 ...]
00:32:22.036  [2024-12-10 00:13:37.816340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.816374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.816558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.816592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.816871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.816905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.817108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.817141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.817358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.817393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.817667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.817701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.817969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.818003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.818299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.818335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.818521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.818555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.818816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.818850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.819079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.819114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.819375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.819411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.819727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.819760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.819989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.820023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.820282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.820318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.820572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.820606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.820886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.820922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.821201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.821238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.821389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.821425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.821628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.821662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.821919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.821952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.822242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.822279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.036  qpair failed and we were unable to recover it.
00:32:22.036  [2024-12-10 00:13:37.822551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.036  [2024-12-10 00:13:37.822587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.822845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.822886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.823180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.823215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.823418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.823452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.823657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.823692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.823911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.823945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.824065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.824099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.824394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.824429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.824616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.824650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.824911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.824944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.825128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.825163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.825465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.825499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.825776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.825810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.826012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.826045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.826237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.826274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.826575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.826609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.826870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.826904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.827130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.827163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.827370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.827404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.827595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.827629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.827895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.827929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.828186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.828221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.828427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.828461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.828756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.828790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.829059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.829093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.037  [2024-12-10 00:13:37.829309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.037  [2024-12-10 00:13:37.829347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.037  qpair failed and we were unable to recover it.
00:32:22.316  [2024-12-10 00:13:37.829628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.316  [2024-12-10 00:13:37.829662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.316  qpair failed and we were unable to recover it.
00:32:22.316  [2024-12-10 00:13:37.829885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.316  [2024-12-10 00:13:37.829919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.316  qpair failed and we were unable to recover it.
00:32:22.316  [2024-12-10 00:13:37.830114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.316  [2024-12-10 00:13:37.830156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.316  qpair failed and we were unable to recover it.
00:32:22.316  [2024-12-10 00:13:37.830427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.316  [2024-12-10 00:13:37.830463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.316  qpair failed and we were unable to recover it.
00:32:22.316  [2024-12-10 00:13:37.830651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.316  [2024-12-10 00:13:37.830685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.316  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.830969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.831004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.831264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.831301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.831597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.831630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.831893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.831927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.832115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.832149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.832430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.832464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.832656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.832690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.832960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.832995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.833249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.833286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.833592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.833628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.833884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.833920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.834210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.834246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.834534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.834568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.834763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.834797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.834925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.834959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.835245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.835281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.835489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.835522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.835806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.835842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.836041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.836076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.836334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.836370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.836661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.836695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.836991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.837026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.837164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.837223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.837504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.837538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.837739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.837776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.837961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.837995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.838190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.838225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.838415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.838449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.317  [2024-12-10 00:13:37.838722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.317  [2024-12-10 00:13:37.838757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.317  qpair failed and we were unable to recover it.
00:32:22.318  [... the preceding three-line error (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeated 102 more times, log timestamps 00:13:37.838872 through 00:13:37.865307 ...]
00:32:22.321  [2024-12-10 00:13:37.865509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.865543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.865772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.865806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.866003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.866038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.866223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.866259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.866514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.866547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.866830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.866864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.867006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.867040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.867253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.867288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.867578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.867612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.867885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.867920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.868187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.868223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.868346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.868380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.868524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.868559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.868751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.868787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.869041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.869075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.869331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.869367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.869640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.869674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.869961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.869996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.870197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.870233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.870418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.321  [2024-12-10 00:13:37.870453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.321  qpair failed and we were unable to recover it.
00:32:22.321  [2024-12-10 00:13:37.870581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.870615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.870799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.870834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.871091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.871126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.871414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.871449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.871646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.871681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.871875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.871909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.872164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.872218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.872418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.872452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.872596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.872631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.872836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.872870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.873208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.873245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.873558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.873592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.873868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.873903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.874097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.874131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.874396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.874431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.874712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.874746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.874956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.874990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.875273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.875309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.875570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.875604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.875893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.875928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.876065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.876100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.876293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.876328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.876583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.876618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.876920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.876954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.877253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.322  [2024-12-10 00:13:37.877290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.322  qpair failed and we were unable to recover it.
00:32:22.322  [2024-12-10 00:13:37.877473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.877507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.877700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.877734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.877934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.877969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.878251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.878287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.878562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.878597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.878880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.878915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.879222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.879258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.879387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.879421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.879618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.879652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.879859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.879894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.880084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.880118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.880336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.880373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.880626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.880660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.880921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.880957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.881100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.881135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.881369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.881405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.881607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.881641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.881847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.881882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.882078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.882111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.882310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.882345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.882628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.882662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.882920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.882961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.883196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.883232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.883424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.883458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.883571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.883605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.883880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.883915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.884122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.884156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.884421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.884457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.884646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.884680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.884874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.884909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.323  [2024-12-10 00:13:37.885206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.323  [2024-12-10 00:13:37.885241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.323  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.885495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.885529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.885663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.885697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.885899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.885934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.886194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.886230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.886439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.886475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.324  [2024-12-10 00:13:37.886738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.324  [2024-12-10 00:13:37.886772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.324  qpair failed and we were unable to recover it.
00:32:22.327  [... the two *ERROR* lines above, followed by "qpair failed and we were unable to recover it.", repeat roughly 100 more times for the same tqpair=0x7fcb74000b90 (addr=10.0.0.2, port=4420), with timestamps advancing from 00:13:37.886997 through 00:13:37.915355 ...]
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.915658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.915692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.915822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.915862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.916135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.916179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.916438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.916472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.916705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.916739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.916963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.916997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.917261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.917297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.917495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.917528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.917785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.917819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.918009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.918043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.918324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.918359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.918555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.918589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.918868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.918903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.919187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.919222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.919410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.327  [2024-12-10 00:13:37.919443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.327  qpair failed and we were unable to recover it.
00:32:22.327  [2024-12-10 00:13:37.919655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.919689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.919872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.919906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.920180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.920215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.920416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.920451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.920710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.920743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.920966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.921001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.921299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.921335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.921603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.921637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.921847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.921881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.922075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.922110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.922323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.922359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.922637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.922671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.922896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.922931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.923253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.923289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.923588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.923622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.923829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.923863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.924000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.924034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.924221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.924258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.924538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.924572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.924805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.924839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.925112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.925146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.925378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.925413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.925715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.925749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.926008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.926043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.926164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.926211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.926487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.926521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.926777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.328  [2024-12-10 00:13:37.926822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.328  qpair failed and we were unable to recover it.
00:32:22.328  [2024-12-10 00:13:37.927121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.927156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.927462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.927498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.927631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.927665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.927873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.927907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.928127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.928160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.928426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.928461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.928665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.928699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.929002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.929036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.929264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.929468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.929502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.929755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.929789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.929979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.930014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.930210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.930244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.930396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.930429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.930646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.930680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.930954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.930989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.931280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.931317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.931534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.931569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.931821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.931854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.932194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.932230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.932453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.932487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.932762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.932796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.933080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.933114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.933307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.933342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.933595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.933628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.933813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.933847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.933976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.934010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.934262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.934298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.934574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.934608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.934859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.934894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.935113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.935146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.935427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.329  [2024-12-10 00:13:37.935461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.329  qpair failed and we were unable to recover it.
00:32:22.329  [2024-12-10 00:13:37.935665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.935700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.935981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.936016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.936124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.936157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.936442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.936476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.936672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.936706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.936901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.936935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.937209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.937244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.937549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.937589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.937864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.937898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.938151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.938197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.938382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.938417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.938610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.938644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.938837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.938872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.938998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.939031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.939329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.939364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.939571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.939606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.939812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.939847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.940027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.940061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.940248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.940283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.940569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.940602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.940867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.940902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.941205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.941241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.941493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.941526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.941800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.941834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.942117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.942151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.942307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.942342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.942618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.942652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.942840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.942873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.943141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.943187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.943390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.943423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.943607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.943641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.943838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.330  [2024-12-10 00:13:37.943871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.330  qpair failed and we were unable to recover it.
00:32:22.330  [2024-12-10 00:13:37.944054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.944089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.944349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.944385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.944618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.944653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.944845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.944880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.945090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.945125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.945250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.945285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.945580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.945614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.945910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.945945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.946154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.946198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.946470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.946504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.946698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.946732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.946932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.946966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.947116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.947150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.947449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.947484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.947756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.947791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.947982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.948022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.948302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.948339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.948560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.948593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.948792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.948826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.949126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.949160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.949454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.949489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.949739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.950038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.950073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.950288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.950324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.950588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.950622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.950915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.950950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.951223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.951258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.951403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.951437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.951623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.951656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.951944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.951979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.331  [2024-12-10 00:13:37.952236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.331  [2024-12-10 00:13:37.952272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.331  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.952571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.952605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.952888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.952922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.953204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.953238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.953519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.953553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.953832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.953867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.954149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.954192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.954374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.954407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.954690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.954723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.955037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.955073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.955352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.955387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.955499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.955533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.955815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.955849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.956071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.956105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.956417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.956454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.956648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.956681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.956955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.956990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.957238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.957275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.957476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.957509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.957790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.957823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.958075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.958109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.958390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.958425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.958678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.958712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.959012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.959046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.959191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.959227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.959424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.959464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.959675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.959709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.960013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.960048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.960336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.960372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.960644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.332  [2024-12-10 00:13:37.960678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.332  qpair failed and we were unable to recover it.
00:32:22.332  [2024-12-10 00:13:37.960968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.961002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.961298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.961334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.961595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.961628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.961756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.961791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.962047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.962081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.962365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.962400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.333  [2024-12-10 00:13:37.962611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.333  [2024-12-10 00:13:37.962644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.333  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.962923 .. 00:13:37.990467] last 3 messages repeated ~103 times: connect() to addr=10.0.0.2, port=4420 refused with errno = 111, and tqpair=0x7fcb74000b90 could not be recovered
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.990686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.990721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.990912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.990946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.991223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.991259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.991518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.991555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.991751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.991786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.991933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.991970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.992159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.992203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.992404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.336  [2024-12-10 00:13:37.992441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.336  qpair failed and we were unable to recover it.
00:32:22.336  [2024-12-10 00:13:37.992705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.992742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.993028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.993061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.993259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.993295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.993546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.993579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.993764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.993799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.993985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.994020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.994150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.994210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.994398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.994431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.994645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.994679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.994862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.994894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.995102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.995136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.995362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.995397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.995679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.995713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.995826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.995859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.996136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.996182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.996479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.996511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.996773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.996807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.996993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.997026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.997232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.997269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.997525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.997560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.997836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.997870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.998135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.998193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.998404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.998437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.998711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.998745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.998943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.998983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.999242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.999280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.999476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.999510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:37.999693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:37.999726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:38.000004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.337  [2024-12-10 00:13:38.000039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.337  qpair failed and we were unable to recover it.
00:32:22.337  [2024-12-10 00:13:38.000164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.000207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.000424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.000458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.000573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.000606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.000786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.000819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.001071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.001106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.001328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.001365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.001598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.001632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.001915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.001949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.002163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.002213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.002407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.002440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.002565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.002602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.002722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.002759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.003022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.003060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.003268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.003304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.003561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.003594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.003860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.003894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.004150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.004195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.004453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.004487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.004696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.004730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.005000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.005035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.005325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.005361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.005661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.005696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.005981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.006017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.006204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.006240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.006423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.006458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.006642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.006677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.006934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.006967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.007149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.007195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.007310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.007343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.007472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.007506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.007782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.338  [2024-12-10 00:13:38.007814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.338  qpair failed and we were unable to recover it.
00:32:22.338  [2024-12-10 00:13:38.008079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.008113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.008322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.008359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.008549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.008584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.008800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.008833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.009111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.009151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.009281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.009316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.009541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.009577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.009775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.009808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.010029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.010064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.010265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.010301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.010538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.010572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.010824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.010858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.339  [2024-12-10 00:13:38.011078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.339  [2024-12-10 00:13:38.011111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.339  qpair failed and we were unable to recover it.
00:32:22.342  [... the three-line sequence above (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats approximately 100 more times, timestamps 00:13:38.011 through 00:13:38.037 ...]
00:32:22.342  qpair failed and we were unable to recover it.
00:32:22.342  [2024-12-10 00:13:38.037587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.342  [2024-12-10 00:13:38.037621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.342  qpair failed and we were unable to recover it.
00:32:22.342  [2024-12-10 00:13:38.037925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.342  [2024-12-10 00:13:38.037961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.342  qpair failed and we were unable to recover it.
00:32:22.342  [2024-12-10 00:13:38.038244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.342  [2024-12-10 00:13:38.038279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.342  qpair failed and we were unable to recover it.
00:32:22.342  [2024-12-10 00:13:38.038486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.342  [2024-12-10 00:13:38.038519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.342  qpair failed and we were unable to recover it.
00:32:22.342  [2024-12-10 00:13:38.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.038684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.038960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.038995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.039264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.039300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.039593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.039627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.039894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.039934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.040216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.040253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.040546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.040580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.040869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.040903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.041137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.041182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.041467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.041501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.041775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.041809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.042095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.042130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.042289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.042324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.042454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.042489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.042761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.042794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.043064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.043098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.043397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.043433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.043689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.043723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.044039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.044074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.044277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.044313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.044455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.044489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.044687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.044722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.044851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.044885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.045164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.045208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.045414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.045448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.045702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.045736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.045954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.045990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.046190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.046224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.046407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.046441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.046697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.046732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.046938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.046972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.343  [2024-12-10 00:13:38.047238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.343  [2024-12-10 00:13:38.047275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.343  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.047527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.047562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.047749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.047784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.047978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.048012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.048208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.048243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.048447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.048481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.048662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.048698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.048911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.048944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.049220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.049255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.049381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.049415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.049686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.049720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.049923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.049957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.050141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.050186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.050489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.050529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.050710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.050745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.050949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.050984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.051261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.051298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.051576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.051610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.051918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.051952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.052160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.052205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.052492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.052527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.052786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.052821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.053015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.053051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.053273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.053309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.053584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.053618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.053822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.053856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.054054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.054089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.344  [2024-12-10 00:13:38.054252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.344  [2024-12-10 00:13:38.054289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.344  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.054586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.054621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.054890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.054926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.055076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.055111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.055302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.055337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.055616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.055650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.055836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.055870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.056147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.056194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.056328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.056361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.056544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.056578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.056778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.056812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.056997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.057031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.057311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.057346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.057652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.057686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.057870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.057905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.058086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.058119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.058386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.058422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.058700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.058734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.059013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.059049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.059333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.059369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.059589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.059623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.059911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.059945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.060140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.060185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.060470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.060505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.060746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.060780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.061042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.061077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.061271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.061313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.061581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.061615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.061817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.061851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.062128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.062161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.062364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.062398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.062536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.062571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.345  qpair failed and we were unable to recover it.
00:32:22.345  [2024-12-10 00:13:38.062826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.345  [2024-12-10 00:13:38.062860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.063059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.063093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.063367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.063402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.063551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.063585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.063796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.063831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.064016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.064050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.064313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.064348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.064625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.064659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.064850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.064886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.065073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.065107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.065394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.065430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.065616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.065652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.065917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.065951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.066147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.066193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.066461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.066496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.066823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.066858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.067135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.067190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.067479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.067513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.067788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.067823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.068013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.068048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.068237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.068273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.068475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.068510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.068720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.068755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.068943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.068978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.069186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.069222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.069501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.069536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.069809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.069843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.070056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.070092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.070371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.070407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.070600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.070633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.070892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.070926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.071188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.346  [2024-12-10 00:13:38.071224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.346  qpair failed and we were unable to recover it.
00:32:22.346  [2024-12-10 00:13:38.071522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.071556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.071759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.071794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.072015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.072055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.072238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.072275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.072491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.072525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.072809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.072843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.073117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.073151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.073438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.073474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.073673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.073707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.073953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.073988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.074288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.074325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.074437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.074472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.074681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.074715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.074990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.075024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.075327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.075361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.075623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.075658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.075890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.075924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.076120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.076154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.076353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.076389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.076661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.076695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.076880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.076914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.077115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.077149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.077435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.077470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.077749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.077782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.078066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.078099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.078298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.078334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.078543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.078577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.078878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.078912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.079098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.079132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.079332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.079367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.079621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.079654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.079936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.347  [2024-12-10 00:13:38.079972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.347  qpair failed and we were unable to recover it.
00:32:22.347  [2024-12-10 00:13:38.080251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.080287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.080544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.080578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.080852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.080887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.081158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.081202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.081486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.081520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.081792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.081826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.082035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.082070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.082287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.082324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.082523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.082557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.082744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.082778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.083054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.083095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.083365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.083400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.083602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.083636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.083820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.083854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.084074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.348  [2024-12-10 00:13:38.084108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.348  qpair failed and we were unable to recover it.
00:32:22.348  [2024-12-10 00:13:38.084424 .. 00:13:38.112303] last message block repeated 102 times: connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.112455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.112495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.112778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.112812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.113106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.113140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.113354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.113389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.113611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.113646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.113916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.113950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.351  qpair failed and we were unable to recover it.
00:32:22.351  [2024-12-10 00:13:38.114195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.351  [2024-12-10 00:13:38.114230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.114535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.114569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.114772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.114806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.115000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.115034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.115255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.115292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.115593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.115628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.115884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.115917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.116223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.116260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.116543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.116577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.116826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.116860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.117157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.117200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.117407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.117442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.117694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.117728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.118036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.118070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.118251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.118304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.118445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.118478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.118694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.118728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.118980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.119014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.119213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.119247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.119530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.119564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.119768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.119801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.120089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.120125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.120421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.120457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.120723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.120757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.121052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.121086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.121354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.121390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.121654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.121687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.121984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.122018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.122213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.122248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.122525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.122559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.122822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.122856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.352  [2024-12-10 00:13:38.123111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.352  [2024-12-10 00:13:38.123146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.352  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.123461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.123497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.123741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.123775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.123992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.124031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.124333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.124369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.124551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.124585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.124782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.124817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.124999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.125034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.125321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.125355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.125558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.125592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.125808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.125843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.126119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.126154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.126441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.126476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.126679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.126714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.126914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.126949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.127205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.127241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.127496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.127530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.127747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.127782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.128059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.128094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.128298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.128333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.128556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.128591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.128787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.128820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.129040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.129073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.129371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.129406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.129674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.129708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.129927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.129961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.130078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.130112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.130399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.130435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.130710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.130744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.131006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.131041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.131327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.131364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.131573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.353  qpair failed and we were unable to recover it.
00:32:22.353  [2024-12-10 00:13:38.131799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.353  [2024-12-10 00:13:38.131833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.132106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.132140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.132353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.132388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.132669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.132703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.132913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.132946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.133237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.133272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.133467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.133501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.133783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.133818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.134062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.134096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.134280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.134315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.134617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.134650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.134915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.134954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [2024-12-10 00:13:38.135137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.354  [2024-12-10 00:13:38.135195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.354  qpair failed and we were unable to recover it.
00:32:22.354  [... the preceding three-line error sequence repeated ~100 more times between 00:13:38.135 and 00:13:38.162: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0x7fcb74000b90 (addr=10.0.0.2, port=4420), and each qpair failed without recovery ...]
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.162991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.163025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.163330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.163367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.163587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.163622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.163871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.163906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.164153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.164202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.164428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.164463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.164665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.164700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.164902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.164937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.165135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.165181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.165456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.165490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.165766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.165800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.166000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.166035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.166294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.166328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.166516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.166549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.166827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.166861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.167041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.167080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.167345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.636  [2024-12-10 00:13:38.167381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.636  qpair failed and we were unable to recover it.
00:32:22.636  [2024-12-10 00:13:38.167588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.167622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.167878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.167912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.168179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.168214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.168418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.168454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.168587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.168621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.168816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.168850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.169128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.169162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.169471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.169505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.169708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.169742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.169929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.169962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.170230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.170266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.170572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.170607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.170894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.170928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.171200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.171236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.171450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.171484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.171668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.171701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.171951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.171986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.172177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.172212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.172491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.172525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.172828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.172862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.173061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.173095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.173395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.173430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.173690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.173723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.637  qpair failed and we were unable to recover it.
00:32:22.637  [2024-12-10 00:13:38.173901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.637  [2024-12-10 00:13:38.173935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.174204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.174240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.174529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.174563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.174837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.174872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.175072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.175105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.175432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.175468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.175654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.175686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.175983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.176019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.176210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.176245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.176523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.176557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.176811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.176846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.177043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.177078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.177329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.177365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.177620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.177653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.177871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.177904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.178093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.178134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.178423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.178459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.178739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.178773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.179050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.179084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.179211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.179248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.179443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.179477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.179753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.179787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.180068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.180103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.180386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.180421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.180614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.180648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.180850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.180884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.181089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.181124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.181321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.181358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.181544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.181577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.181804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.181839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.182036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.182070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.638  qpair failed and we were unable to recover it.
00:32:22.638  [2024-12-10 00:13:38.182271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.638  [2024-12-10 00:13:38.182305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.182577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.182611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.182823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.182858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.183161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.183208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.183414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.183449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.183655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.183688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.183937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.183971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.184188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.184223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.184412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.184445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.184653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.184687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.184889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.184924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.185179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.185215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.185415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.185450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.185580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.185614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.185835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.185870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.186090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.186123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.186337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.186373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.186664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.186698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.186825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.186859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.187139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.187193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.187399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.187431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.187655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.187690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.187964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.187998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.188135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.188181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.188408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.188447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.188647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.188682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.188984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.189017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.189213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.189249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.189504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.189539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.189795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.189831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.190024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.190058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.190310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.639  [2024-12-10 00:13:38.190345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.639  qpair failed and we were unable to recover it.
00:32:22.639  [2024-12-10 00:13:38.190619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.190653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.190856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.190890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.191143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.191200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.191484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.191518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.191795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.191829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.192111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.192144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.192428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.192463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.192743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.192778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.192969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.193003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.193218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.193254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.193536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.193571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.193788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.193822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.194043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.194078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.194380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.194416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.194614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.194648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.194931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.194965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.195219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.195254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.195483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.195516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.195695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.195730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.196047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.196083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.196351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.196387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.196640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.196675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.196892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.196926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.197191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.197225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.197520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.197554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.197816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.197849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.198070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.198105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.198372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.640  [2024-12-10 00:13:38.198407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.640  qpair failed and we were unable to recover it.
00:32:22.640  [2024-12-10 00:13:38.198634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.198668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.198881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.198915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.199189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.199225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.199410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.199443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.199712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.199752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.200024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.200059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.200261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.200297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.200479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.200512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.200721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.200755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.200979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.201012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.201230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.201265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.201518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.201552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.201808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.201842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.202140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.202187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.202399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.202433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.202638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.202673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.202892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.202927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.203194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.203230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.203384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.203418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.203554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.203588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.203768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.203802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.204002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.204037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.204303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.204339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.204563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.204596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.204835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.204870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.205073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.205107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.641  qpair failed and we were unable to recover it.
00:32:22.641  [2024-12-10 00:13:38.205326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.641  [2024-12-10 00:13:38.205362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.205637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.205671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.205795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.205830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.206008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.206041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.206246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.206282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.206493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.206528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.206710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.206745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.206979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.207258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.207293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.207618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.207652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.207923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.207957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.208161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.208206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.208413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.208448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.208628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.208663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.208933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.208967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [2024-12-10 00:13:38.209239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.642  [2024-12-10 00:13:38.209278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.642  qpair failed and we were unable to recover it.
00:32:22.642  [above error pattern repeated 102 more times, 00:13:38.209565 through 00:13:38.236573: posix_sock_create connect() failed with errno 111 (ECONNREFUSED) followed by nvme_tcp_qpair_connect_sock failure for the same tqpair=0x7fcb74000b90, addr=10.0.0.2, port=4420]
00:32:22.646  [2024-12-10 00:13:38.236767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.236801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.237067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.237101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.237306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.237345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.237495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.237528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.237802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.237836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.237955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.237990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.238218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.238254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.238482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.238517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.238653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.238688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.238892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.238928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.239113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.239147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.239436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.239472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.239656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.239691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.239873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.239907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.240107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.240144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.240439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.240474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.240678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.240714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.240991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.241025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.241296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.241331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.241595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.241630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.241826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.241861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.242151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.242198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.646  qpair failed and we were unable to recover it.
00:32:22.646  [2024-12-10 00:13:38.242347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.646  [2024-12-10 00:13:38.242382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.242495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.242530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.242661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.242696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.242821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.242858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.243061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.243096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.243301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.243338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.243535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.243570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.243756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.243792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.243926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.243960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.244218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.244253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.244379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.244413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.244526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.244559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.244749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.244784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.245035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.245070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.245293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.245329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.245603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.245637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.245763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.245797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.246051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.246085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.246296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.246331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.246463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.246497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.246682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.246717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.246999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.247033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.247236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.247271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.247472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.247506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.247806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.647  [2024-12-10 00:13:38.247841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.647  qpair failed and we were unable to recover it.
00:32:22.647  [2024-12-10 00:13:38.248038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.248083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.248298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.248334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.248611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.248646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.248780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.248814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.248958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.248993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.249120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.249154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.249347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.249382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.249583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.249617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.249825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.249859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.249997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.250033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.250149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.250194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.250381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.250415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.250618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.250652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.250880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.250914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.251045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.251080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.251212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.251247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.251503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.251538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.251743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.251777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.251893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.251928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.252066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.252101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.252288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.252324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.252615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.252651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.252852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.252887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.253022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.253056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.253359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.253395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.253686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.253722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.253943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.253978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.254193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.254230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.254443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.254479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.254667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.254703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.648  qpair failed and we were unable to recover it.
00:32:22.648  [2024-12-10 00:13:38.254985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.648  [2024-12-10 00:13:38.255019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.649  qpair failed and we were unable to recover it.
00:32:22.649  [2024-12-10 00:13:38.255213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.649  [2024-12-10 00:13:38.255250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.649  qpair failed and we were unable to recover it.
00:32:22.649  [2024-12-10 00:13:38.255381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.649  [2024-12-10 00:13:38.255414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.649  qpair failed and we were unable to recover it.
00:32:22.649  [2024-12-10 00:13:38.255641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.649  [2024-12-10 00:13:38.255675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.649  qpair failed and we were unable to recover it.
00:32:22.649  [2024-12-10 00:13:38.255882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.649  [2024-12-10 00:13:38.255917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.649  qpair failed and we were unable to recover it.
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.282601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.282636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.282919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.282954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.283213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.283248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.283515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.283551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.283746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.283780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.283961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.283996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.284250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.284287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.284489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.284523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.284774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.284812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.285018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.285054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.285334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.285369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.285591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.285627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.285752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.285789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.285972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.286006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.286209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.286244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.286450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.286485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.652  qpair failed and we were unable to recover it.
00:32:22.652  [2024-12-10 00:13:38.286598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.652  [2024-12-10 00:13:38.286632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.286816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.286850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.287068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.287245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.287491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.287641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.287869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.287984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.288017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.288223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.288259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.288585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.288620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.288750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.288784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.288968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.289003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.289202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.289237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.289484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.289518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.289716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.289750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.290004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.290039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.290164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.290209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.290397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.290434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.290689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.290723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.290910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.290944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.291132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.291191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.291452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.291486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.291620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.291653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.291879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.291913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.292046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.292080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.292365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.292402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.292586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.292623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.292825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.292861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.653  [2024-12-10 00:13:38.292988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.653  [2024-12-10 00:13:38.293023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.653  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.293227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.293263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.293378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.293413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.293694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.293729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.293927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.293963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.294153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.294200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.294325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.294358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.294654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.294689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.294885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.294919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.295050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.295084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.295265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.295301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.295490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.295525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.295842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.295877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.296163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.296212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.296430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.296466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.296664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.296698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.296822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.296856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.297041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.297075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.297219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.297264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.297468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.297501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.297639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.297672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.297935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.297969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.298182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.298217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.298403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.298436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.298654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.298688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.298878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.298914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.299100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.299134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.299432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.299468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.299682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.299716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.300047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.300082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.300267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.300304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.300427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.300464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.654  [2024-12-10 00:13:38.300650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.654  [2024-12-10 00:13:38.300684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.654  qpair failed and we were unable to recover it.
00:32:22.655  [2024-12-10 00:13:38.300886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.655  [2024-12-10 00:13:38.300921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.655  qpair failed and we were unable to recover it.
00:32:22.655  [2024-12-10 00:13:38.301182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.655  [2024-12-10 00:13:38.301217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.655  qpair failed and we were unable to recover it.
00:32:22.655  [2024-12-10 00:13:38.301418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.655  [2024-12-10 00:13:38.301452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.655  qpair failed and we were unable to recover it.
00:32:22.655  [2024-12-10 00:13:38.301646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.655  [2024-12-10 00:13:38.301680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.655  qpair failed and we were unable to recover it.
00:32:22.655  [... previous 3 lines repeated 48 more times (timestamps 00:13:38.301950 through 00:13:38.314120) for tqpair=0x7fcb74000b90 ...]
00:32:22.656  [2024-12-10 00:13:38.314409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.656  [2024-12-10 00:13:38.314491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.656  qpair failed and we were unable to recover it.
00:32:22.656  [... previous 3 lines repeated 53 more times (timestamps 00:13:38.314716 through 00:13:38.328578) for tqpair=0x7fcb68000b90; section ends mid-way through a further repetition at 00:13:38.328894 ...]
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.329183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.329218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.329419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.329453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.329655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.329689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.329875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.329910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.330110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.330144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.330354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.330389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.330664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.330700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.330959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.330994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.331213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.331249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.331446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.331482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.331736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.331771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.332074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.332108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.332398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.332434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.332616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.332651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.332775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.332809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.333094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.658  [2024-12-10 00:13:38.333129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.658  qpair failed and we were unable to recover it.
00:32:22.658  [2024-12-10 00:13:38.333348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.333384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.333603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.333637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.333934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.333968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.334093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.334128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.334419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.334453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.334741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.334775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.335050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.335085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.335302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.335340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.335616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.335650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.335849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.335884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.336138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.336191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.336476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.336511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.336725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.336759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.336873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.336907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.337108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.337143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.337345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.337381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.337569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.337604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.337865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.337919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.338154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.338219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.338369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.338415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.338645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.338686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.338885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.338920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.339224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.339269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.339498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.339533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.339739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.339774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.339985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.340019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.340277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.340314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.340588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.340623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.340903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.340937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.341190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.341226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.341553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.341587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.341851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.659  [2024-12-10 00:13:38.341886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.659  qpair failed and we were unable to recover it.
00:32:22.659  [2024-12-10 00:13:38.342140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.342186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.342374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.342408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.342599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.342634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.342840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.342874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.343157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.343214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.343514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.343549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.343757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.343792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.344045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.344080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.344389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.344425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.344697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.344732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.345016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.345051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.345238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.345274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.345406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.345441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.345665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.345699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.345978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.346013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.346222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.346258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.346540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.346573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.346790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.346825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.347109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.347144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.347346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.347381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.347523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.347558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.347811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.347845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.348035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.348070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.348287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.348321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.348601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.348635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.348915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.348950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.349234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.349270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.349548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.349582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.349881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.349916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.350193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.350230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.350516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.350564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.350752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.660  [2024-12-10 00:13:38.350787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.660  qpair failed and we were unable to recover it.
00:32:22.660  [2024-12-10 00:13:38.351066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.661  [2024-12-10 00:13:38.351100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.661  qpair failed and we were unable to recover it.
[... previous three lines repeated verbatim (timestamps advancing through 00:13:38.380449); every reconnect attempt to 10.0.0.2:4420 failed with errno = 111 and the qpair could not be recovered ...]
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.380730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.380764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.380913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.380953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.381069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.381103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.381396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.381432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.381636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.381671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.381864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.381899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.382195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.382232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.382518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.382553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.382826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.382860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.383052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.383086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.383353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.383390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.383590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.383625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.383839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.383874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.384076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.384110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.384400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.384435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.384708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.384742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.385033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.385074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.385265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.385302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.385556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.385590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.385781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.385816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.664  [2024-12-10 00:13:38.386074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.664  [2024-12-10 00:13:38.386109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.664  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.386406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.386442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.386741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.386775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.386979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.387013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.387205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.387241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.387425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.387460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.387734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.387768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.388091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.388126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.388406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.388443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.388640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.388675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.388868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.388903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.389188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.389224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.389435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.389470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.389697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.389731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.390007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.390041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.390331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.390367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.390554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.390588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.390851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.390886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.391038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.391073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.391328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.391364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.391661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.391695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.391958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.391999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.392192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.392227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.392410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.392444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.392646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.392681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.392895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.392929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.393190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.393227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.393424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.393459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.393757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.393792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.394055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.394091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.394390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.394428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.394650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.394685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.394941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.394977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.395229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.665  [2024-12-10 00:13:38.395265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.665  qpair failed and we were unable to recover it.
00:32:22.665  [2024-12-10 00:13:38.395566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.395600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.395899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.395936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.396206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.396241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.396530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.396565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.396783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.396817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.397028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.397063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.397217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.397253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.397456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.397491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.397699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.397733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.397932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.397966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.398220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.398257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.398531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.398565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.398843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.398878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.399078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.399113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.399318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.399353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.399630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.399664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.399845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.399880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.400148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.400194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.400400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.400434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.400570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.400606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.400784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.400819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.401082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.401117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.401405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.401441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.401624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.401659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.401938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.401973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.402184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.402221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [2024-12-10 00:13:38.402473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.666  [2024-12-10 00:13:38.402507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.666  qpair failed and we were unable to recover it.
00:32:22.666  [... the three preceding lines (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it") repeat verbatim from 2024-12-10 00:13:38.402 through 00:13:38.431 as the initiator continuously retries the refused connection ...]
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.431370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.431404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.431593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.431627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.431768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.431802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.432000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.432034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.432299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.432335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.432622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.432657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.432930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.432965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.433251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.433286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.433564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.433598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.433784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.433818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.434082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.434116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.434395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.669  [2024-12-10 00:13:38.434431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.669  qpair failed and we were unable to recover it.
00:32:22.669  [2024-12-10 00:13:38.434734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.434769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.435052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.435087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.435226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.435262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.435515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.435549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.435672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.435708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.436001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.436041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.436323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.436358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.436546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.436581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.436705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.436740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.436954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.436989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.437125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.437160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.437400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.437435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.437642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.437676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.437954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.437988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.438241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.438277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.438557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.438591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.438876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.438911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.439134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.439178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.439364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.439398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.439585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.439620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.439874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.439909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.440189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.440225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.440505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.440541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.440859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.440894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.441096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.441131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.441357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.441392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.441670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.441704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.441908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.441943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.442194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.442229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.442448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.442485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.442758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.442792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.443078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.443112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.443346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.443382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.443540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.443575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.443779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.443813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.444014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.444048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.444327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.444364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.670  [2024-12-10 00:13:38.444665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.670  [2024-12-10 00:13:38.444700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.670  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.444883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.444917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.445099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.445133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.445400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.445436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.445617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.445651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.445941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.445975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.446232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.446268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.446473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.446508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.446775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.446815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.447016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.447051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.447332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.447367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.447642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.447676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.447861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.447896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.448164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.448208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.448524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.448661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.448695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.448969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.449005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.449258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.449294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.449549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.449584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.449802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.449836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.450094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.450130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.450410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.450447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.450653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.450688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.450891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.450926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.451132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.451179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.451461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.451496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.451762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.451797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.452089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.452124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.452417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.452452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.452659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.452693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.452993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.453027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.453212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.453248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.453394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.453428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.453682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.453715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.453919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.453954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.454148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.454193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.454380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.671  [2024-12-10 00:13:38.454415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.671  qpair failed and we were unable to recover it.
00:32:22.671  [2024-12-10 00:13:38.454619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.454653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.454857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.454892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.455163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.455208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.455397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.455430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.455691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.455725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.455910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.455944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.456223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.456258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.456522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.456557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.456854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.456893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.457190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.457224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3250200 Killed                  "${NVMF_APP[@]}" "$@"
00:32:22.672  [2024-12-10 00:13:38.457488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.457522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.457780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.457815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:32:22.672  [2024-12-10 00:13:38.458092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.458128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.458445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.458482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.458711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.458745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:22.672  [2024-12-10 00:13:38.459024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.459058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:22.672  [2024-12-10 00:13:38.459268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.459303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:22.672  [2024-12-10 00:13:38.459564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.459599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.459753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.459787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.460062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.460096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.460374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.460410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.460693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.460727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.461007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.461043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.461268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.461305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.461465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.461500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.461781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.461816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.462098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.462131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.462266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.462302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.462487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.462522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.462741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.462776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.462916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.462950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.463155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.463202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.463355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.463389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.463692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.463728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.463928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.463962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.464231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.464269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.464466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.672  [2024-12-10 00:13:38.464498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.672  qpair failed and we were unable to recover it.
00:32:22.672  [2024-12-10 00:13:38.464717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.464752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.464943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.464978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.465188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.465224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.465353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.465387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.465585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.465619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.465893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.465929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.466209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.466245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3250957
00:32:22.673  [2024-12-10 00:13:38.466403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.466439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.466626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.466660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3250957
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:32:22.673  [2024-12-10 00:13:38.466867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.466902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.467013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.467054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3250957 ']'
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.467213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.467249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:22.673  [2024-12-10 00:13:38.467450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.467485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:22.673  [2024-12-10 00:13:38.467742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.467777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:22.673  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:22.673  [2024-12-10 00:13:38.468075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.468110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:22.673  [2024-12-10 00:13:38.468438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.468473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:22.673  [2024-12-10 00:13:38.468752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.468788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.468920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.468955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.469187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.469222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.469479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.469514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.469723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.469762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.469987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.470020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.470151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.470200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.470404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.470442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.470627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.470664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.470922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.470957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.471231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.471266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.471390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.471426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.471643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.471678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.471842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.471876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.472161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.472209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.472410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.673  [2024-12-10 00:13:38.472444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.673  qpair failed and we were unable to recover it.
00:32:22.673  [2024-12-10 00:13:38.472634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.472669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.674  [2024-12-10 00:13:38.472961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.472995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.674  [2024-12-10 00:13:38.473303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.473341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.674  [2024-12-10 00:13:38.473629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.473664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.674  [2024-12-10 00:13:38.473886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.473921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.674  [2024-12-10 00:13:38.474215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.674  [2024-12-10 00:13:38.474250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.674  qpair failed and we were unable to recover it.
00:32:22.962  [2024-12-10 00:13:38.474477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.962  [2024-12-10 00:13:38.474511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.962  qpair failed and we were unable to recover it.
00:32:22.962  [2024-12-10 00:13:38.474806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.962  [2024-12-10 00:13:38.474842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.962  qpair failed and we were unable to recover it.
00:32:22.962  [2024-12-10 00:13:38.475081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.962  [2024-12-10 00:13:38.475115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.962  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.475425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.475462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.475659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.475693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.475897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.475931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.476118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.476153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.476378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.476419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.476575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.476610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.476833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.476867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.476991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.477027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.477234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.477270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.477467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.477501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.477646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.477680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.477977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.478011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.478230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.478266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.478458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.478492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.478634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.478668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.478912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.478947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.479227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.479263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.479568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.479603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.479844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.479880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.480088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.480127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.480363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.480398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.480546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.480580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.480802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.480837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.481078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.481115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.481321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.481356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.481564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.481600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.481730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.481765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.482039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.482073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.482327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.482362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.482513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.482548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.963  qpair failed and we were unable to recover it.
00:32:22.963  [2024-12-10 00:13:38.482840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.963  [2024-12-10 00:13:38.482875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.483065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.483099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.483303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.483340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.483496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.483531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.483718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.483753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.484017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.484050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.484287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.484322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.484598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.484633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.484844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.484878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.485144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.485192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.485395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.485429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.485634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.485673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.485993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.486027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.486329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.486365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.486552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.486588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.486734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.486768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.486912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.486949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.487218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.487255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.487406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.487441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.487638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.487673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.487880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.487916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.488203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.488239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.488384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.488419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.488602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.488636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.488907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.488942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.489082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.489115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.489425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.489461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.489683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.489718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.489925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.489959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.490105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.490146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.490369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.490404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.490532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.964  [2024-12-10 00:13:38.490567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.964  qpair failed and we were unable to recover it.
00:32:22.964  [2024-12-10 00:13:38.490696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.490731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.490936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.490971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.491126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.491159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.491340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.491375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.491607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.491644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.491864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.491898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.492101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.492135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.492302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.492339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.492474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.492508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.492713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.492748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.492973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.493006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.493216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.493253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.493405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.493440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.493575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.493610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.493796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.493832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.494130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.494180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.494387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.494422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.494612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.494647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.494999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.495034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.495252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.495288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.495496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.495531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.495749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.495783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.496032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.496067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.496353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.496389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.496540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.496575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.496727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.496762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.496961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.496996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.497255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.497292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.497434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.497469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.497682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.497716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.497932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.497966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.965  [2024-12-10 00:13:38.498227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.965  [2024-12-10 00:13:38.498263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.965  qpair failed and we were unable to recover it.
00:32:22.966  [... the preceding three log lines repeat 32 more times between 00:13:38.498448 and 00:13:38.505322: connect() fails with errno = 111 and the qpair at tqpair=0x7fcb68000b90 (10.0.0.2, port 4420) cannot be recovered ...]
00:32:22.967  [2024-12-10 00:13:38.505536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.967  [2024-12-10 00:13:38.505619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.967  qpair failed and we were unable to recover it.
00:32:22.967  [2024-12-10 00:13:38.505841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb570f0 is same with the state(6) to be set
00:32:22.967  [2024-12-10 00:13:38.506134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.967  [2024-12-10 00:13:38.506229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.967  qpair failed and we were unable to recover it.
00:32:22.967  [2024-12-10 00:13:38.506432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.967  [2024-12-10 00:13:38.506512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.967  qpair failed and we were unable to recover it.
00:32:22.967  [2024-12-10 00:13:38.506747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.967  [2024-12-10 00:13:38.506786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.967  qpair failed and we were unable to recover it.
00:32:22.967  [... the preceding three log lines repeat 50 more times between 00:13:38.506910 and 00:13:38.516733, all errno = 111 connect() failures on tqpair=0x7fcb68000b90 (10.0.0.2, port 4420) with the qpair unrecoverable ...]
00:32:22.968  [2024-12-10 00:13:38.517683] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:32:22.968  [2024-12-10 00:13:38.517732] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:22.968  [2024-12-10 00:13:38.518694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.968  [2024-12-10 00:13:38.518756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.968  qpair failed and we were unable to recover it.
00:32:22.968  [... the preceding three log lines repeat 10 more times between 00:13:38.519064 and 00:13:38.521212, all errno = 111 connect() failures on tqpair=0x7fcb68000b90 (10.0.0.2, port 4420) with the qpair unrecoverable ...]
00:32:22.969  [2024-12-10 00:13:38.521372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.969  [2024-12-10 00:13:38.521417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.969  qpair failed and we were unable to recover it.
00:32:22.969  [... preceding three-line connect() error sequence repeated 35 more times for tqpair=0x7fcb6c000b90, through 00:13:38.530144 ...]
00:32:22.970  [2024-12-10 00:13:38.530475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.970  [2024-12-10 00:13:38.530516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.970  qpair failed and we were unable to recover it.
00:32:22.970  [... preceding three-line connect() error sequence repeated 3 more times for tqpair=0x7fcb68000b90, through 00:13:38.531264 ...]
00:32:22.970  [2024-12-10 00:13:38.531444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.970  [2024-12-10 00:13:38.531520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.970  qpair failed and we were unable to recover it.
00:32:22.970  [2024-12-10 00:13:38.531758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.970  [2024-12-10 00:13:38.531817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.970  qpair failed and we were unable to recover it.
00:32:22.970  [... preceding three-line connect() error sequence repeated 54 more times for tqpair=0xb491a0, through 00:13:38.542557 ...]
00:32:22.972  [2024-12-10 00:13:38.542751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.542785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.542920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.542953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.543142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.543187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.543391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.543425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.543552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.543585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.543747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.543874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.543909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.544922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.544965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  [2024-12-10 00:13:38.545211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.972  [2024-12-10 00:13:38.545255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.972  qpair failed and we were unable to recover it.
00:32:22.972  (last 3 messages repeated 46 times, tqpair=0x7fcb68000b90, 00:13:38.545374–00:13:38.553965)
00:32:22.974  [2024-12-10 00:13:38.554190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.974  [2024-12-10 00:13:38.554248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.974  qpair failed and we were unable to recover it.
00:32:22.974  [2024-12-10 00:13:38.554486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.974  [2024-12-10 00:13:38.554533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.974  qpair failed and we were unable to recover it.
00:32:22.974  (last 3 messages repeated 30 times, tqpair=0x7fcb74000b90, 00:13:38.554656–00:13:38.560073)
00:32:22.975  [2024-12-10 00:13:38.560249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.560291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.560408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.560451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.560573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.560607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.560715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.560749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.560885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.560918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.561880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.561913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.562881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.562915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.563950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.563983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.975  [2024-12-10 00:13:38.564096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.975  [2024-12-10 00:13:38.564130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.975  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.564252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.564286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.564399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.564432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.564538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.564572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.564696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.564729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.564854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.564887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.565018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.565052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.565231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.565266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.565446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.565480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.565706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.565739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.565851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.565884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.566860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.566978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.567216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.567363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.567532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.567680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.567913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.567946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.568929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.568962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.569074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.569107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.569235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.569270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.569477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.569511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.976  [2024-12-10 00:13:38.569648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.976  [2024-12-10 00:13:38.569684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.976  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.569800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.569834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.570853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.570887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.571153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.571210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.571321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.571354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.571461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.571496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.571635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.571668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.571867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.571902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.572874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.572907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.573926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.573959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.574136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.574187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.977  [2024-12-10 00:13:38.574470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.977  [2024-12-10 00:13:38.574503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.977  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.587680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.587713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.587822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.587839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.587916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.587931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.588782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.588987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.589949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.589964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [2024-12-10 00:13:38.590729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.980  [2024-12-10 00:13:38.590745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.980  qpair failed and we were unable to recover it.
00:32:22.980  [... same two-line error pair ("posix_sock_create: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420") followed by "qpair failed and we were unable to recover it." repeated 78 more times, timestamps 00:13:38.590903 through 00:13:38.601605 ...]
00:32:22.983  [2024-12-10 00:13:38.601655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:32:22.983  [... same error group repeated a further 24 times, timestamps 00:13:38.601674 through 00:13:38.604020 ...]
00:32:22.984  [2024-12-10 00:13:38.604148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.604967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.604979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.605864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.605875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.984  qpair failed and we were unable to recover it.
00:32:22.984  [2024-12-10 00:13:38.606795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.984  [2024-12-10 00:13:38.606806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.606866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.606879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.606937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.606949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.607958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.607972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.608953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.608966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.985  [2024-12-10 00:13:38.609780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.985  [2024-12-10 00:13:38.609793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.985  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.611959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.611974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.612953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.612966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [2024-12-10 00:13:38.613546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.986  [2024-12-10 00:13:38.613561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.986  qpair failed and we were unable to recover it.
00:32:22.986  [... the preceding three lines (posix.c:1054:posix_sock_create connect() failed, errno = 111 (ECONNREFUSED) / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeat 102 more times between 00:13:38.613 and 00:13:38.625 ...]
00:32:22.989  [2024-12-10 00:13:38.625339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.989  [2024-12-10 00:13:38.625354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.989  qpair failed and we were unable to recover it.
00:32:22.989  [2024-12-10 00:13:38.625491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.625507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.625640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.625656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.625803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.625818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.625886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.625902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.625980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.625997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.626959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.626974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.627980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.627999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.628981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.990  [2024-12-10 00:13:38.628999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.990  qpair failed and we were unable to recover it.
00:32:22.990  [2024-12-10 00:13:38.629137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.629897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.629987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.630781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.630798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.631961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.631980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.632833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.632988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.633006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.991  qpair failed and we were unable to recover it.
00:32:22.991  [2024-12-10 00:13:38.633094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.991  [2024-12-10 00:13:38.633111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.633951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.633968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.634902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.634996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.635015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.635111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.635128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.635214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.635233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [2024-12-10 00:13:38.635307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.992  [2024-12-10 00:13:38.635324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.992  qpair failed and we were unable to recover it.
00:32:22.992  [... previous 3 lines (connect() failed, errno = 111 / sock connection error of tqpair=0x7fcb68000b90 / qpair failed) repeated 82 more times, 00:13:38.635468 through 00:13:38.646984 ...]
00:32:22.995  [2024-12-10 00:13:38.646993] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:22.995  [2024-12-10 00:13:38.647019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:22.995  [2024-12-10 00:13:38.647027] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:22.995  [2024-12-10 00:13:38.647034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:22.995  [2024-12-10 00:13:38.647039] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:22.995  [2024-12-10 00:13:38.647128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.647147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [... previous 3 lines (connect() failed, errno = 111 / sock connection error of tqpair=0x7fcb68000b90 / qpair failed) repeated 10 more times, 00:13:38.647297 through 00:13:38.648567 ...]
00:32:22.995  [2024-12-10 00:13:38.648549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:32:22.995  [2024-12-10 00:13:38.648720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.648662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:32:22.995  [2024-12-10 00:13:38.648747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.648763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:32:22.995  [2024-12-10 00:13:38.648763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:32:22.995  [2024-12-10 00:13:38.648849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.648873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.648974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.648997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.649944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.649970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.995  [2024-12-10 00:13:38.650149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.995  [2024-12-10 00:13:38.650181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.995  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.650350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.650375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.650617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.650641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.650751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.650777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.650956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.650981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.651957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.651981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.652820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.652846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.653914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.653940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.654957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.654983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.655075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.655101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.996  [2024-12-10 00:13:38.655210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.996  [2024-12-10 00:13:38.655238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.996  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.655328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.655354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.655614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.655688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.655916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.655962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.656102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.656140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.656441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.656479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.656602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.656637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.656752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.656785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.657875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.657909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.658029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.658064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.658243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.658279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.658410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.658444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.658553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.658587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.658779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.658812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.659812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.659847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.660018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.660053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.660243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.660278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.660410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.660444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.997  [2024-12-10 00:13:38.660552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.997  [2024-12-10 00:13:38.660586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.997  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.660725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.660762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.660947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.660982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.661091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.661125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.661260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.661294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.661477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.661512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.661636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.661669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.661783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.661824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.662891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.662999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.663037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.663176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.663208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.663389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.663421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.663542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.663575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [2024-12-10 00:13:38.663694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.998  [2024-12-10 00:13:38.663724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:22.998  qpair failed and we were unable to recover it.
00:32:22.998  [... the three-line error above (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / qpair failed and we were unable to recover it) repeats 24 more times for tqpair=0x7fcb68000b90, timestamps 00:13:38.663900 through 00:13:38.668274 ...]
00:32:22.999  [2024-12-10 00:13:38.667983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.999  [2024-12-10 00:13:38.668046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:22.999  qpair failed and we were unable to recover it.
00:32:22.999  [2024-12-10 00:13:38.668181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:22.999  [2024-12-10 00:13:38.668221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:22.999  qpair failed and we were unable to recover it.
00:32:22.999  [... the same three-line error triplet repeats ~76 more times for tqpair=0x7fcb6c000b90, timestamps 00:13:38.668420 through 00:13:38.681621 ...]
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.681740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.681776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.681895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.681930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.682837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.682871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.683892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.683927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.001  qpair failed and we were unable to recover it.
00:32:23.001  [2024-12-10 00:13:38.684052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.001  [2024-12-10 00:13:38.684087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.684213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.684249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.684378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.684413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.684588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.684625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.684750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.684784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.684982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.685192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.685416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.685578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.685740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.685888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.685921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.686106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.686139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.686419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.686454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.686675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.686708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.686827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.686860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.687053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.687086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.687301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.687336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.687591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.687624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.687793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.687824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.687990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.688131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.688293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.688496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.688639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.688842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.688872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.689963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.689996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.690122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.690154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.690312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.690344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.002  [2024-12-10 00:13:38.690473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.002  [2024-12-10 00:13:38.690504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.002  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.690621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.690652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.690753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.690786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.690897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.690930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.691052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.691083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.691197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.691236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.691473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.691504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.691627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.691660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.691854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.691886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.692086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.692117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.692328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.692361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.692530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.692561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.692797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.692829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.692995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.693242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.693444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.693582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.693725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.693876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.693907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.694016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.694048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.694232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.694265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.694440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.694470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.694638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.694669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.694888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.694919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.695212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.695248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.695488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.695519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.695708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.695739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.696901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.696933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [2024-12-10 00:13:38.697118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.003  [2024-12-10 00:13:38.697151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.003  qpair failed and we were unable to recover it.
00:32:23.003  [... previous two messages repeated 38 more times: connect() failed, errno = 111 (ECONNREFUSED) for tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420; qpair failed and could not be recovered ...]
00:32:23.004  [2024-12-10 00:13:38.705389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.004  [2024-12-10 00:13:38.705452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.004  qpair failed and we were unable to recover it.
00:32:23.005  [... previous two messages repeated 39 more times: connect() failed, errno = 111 (ECONNREFUSED) for tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420; qpair failed and could not be recovered ...]
00:32:23.005  [2024-12-10 00:13:38.712925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.005  [2024-12-10 00:13:38.712979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.005  qpair failed and we were unable to recover it.
00:32:23.006  [... previous two messages repeated 24 more times: connect() failed, errno = 111 (ECONNREFUSED) for tqpair=0xb491a0 with addr=10.0.0.2, port=4420; qpair failed and could not be recovered ...]
00:32:23.006  [2024-12-10 00:13:38.717382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.717415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.717531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.717564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.717679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.717719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.717910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.717944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.718083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.718297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.718460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.718672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.718828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.718999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.719033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.719153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.719196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.719306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.719338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.719542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.719575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.719764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.719799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.720064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.720097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.720216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.720252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.720425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.006  [2024-12-10 00:13:38.720459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.006  qpair failed and we were unable to recover it.
00:32:23.006  [2024-12-10 00:13:38.720585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.720619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.720732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.720765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.720957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.720990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.721116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.721149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.721349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.721384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.721523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.721556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.721690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.721723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.721972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.722785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.722999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.723276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.723421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.723565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.723719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.723877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.723911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.724062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.724268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.724490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.724637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.724848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.724972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.725809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.725988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.726138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.726317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.726577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.726731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.726885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.726918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.727098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.727132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.007  qpair failed and we were unable to recover it.
00:32:23.007  [2024-12-10 00:13:38.727320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.007  [2024-12-10 00:13:38.727357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.727577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.727609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.727796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.727835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.728920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.728952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.729944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.729977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.730097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.730131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.730377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.730426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.730553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.730587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.730707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.730740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.730858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.730892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.731071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.731105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.731295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.731330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.731447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.731482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.731723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.731757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.731882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.731916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.732035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.732069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.732184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.732219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.008  [2024-12-10 00:13:38.732398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.008  [2024-12-10 00:13:38.732431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.008  qpair failed and we were unable to recover it.
00:32:23.010  [the triplet above repeats ~100 times between 00:13:38.732 and 00:13:38.750: connect() failed with errno = 111 (ECONNREFUSED), followed by a sock connection error and an unrecoverable qpair, cycling through tqpair values 0x7fcb74000b90, 0xb491a0, and 0x7fcb68000b90, all targeting addr=10.0.0.2, port=4420]
00:32:23.010  [2024-12-10 00:13:38.750573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.010  [2024-12-10 00:13:38.750608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.010  qpair failed and we were unable to recover it.
00:32:23.010  [2024-12-10 00:13:38.750735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.750768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.750946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.750979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.751158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.751331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.751545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.751725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.751864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.751972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.752123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.752268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.752540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.752680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:23.011  [2024-12-10 00:13:38.752815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.752957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.752989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:32:23.011  [2024-12-10 00:13:38.753115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.753258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.753391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:23.011  [2024-12-10 00:13:38.753549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.753704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:23.011  [2024-12-10 00:13:38.753852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.753884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.011  [2024-12-10 00:13:38.754091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.754261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.754467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.754622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.754755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.754865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.754897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.755903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.755935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.756823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.756855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.757041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.757071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.757184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.757217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.011  qpair failed and we were unable to recover it.
00:32:23.011  [2024-12-10 00:13:38.757402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.011  [2024-12-10 00:13:38.757435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.757553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.757585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.757687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.757719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.757865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.757899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.758795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.758827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.759869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.759900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.760962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.760996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.761209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.761353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.761572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.761719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.761862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.761985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.762204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.762420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.762555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.762708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.762860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.762892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.763077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.763109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.763245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.763278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.763381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.763415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.012  [2024-12-10 00:13:38.763527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.012  [2024-12-10 00:13:38.763559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.012  qpair failed and we were unable to recover it.
00:32:23.013  [2024-12-10 00:13:38.766599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.013  [2024-12-10 00:13:38.766641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.013  qpair failed and we were unable to recover it.
00:32:23.014  [2024-12-10 00:13:38.773315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.014  [2024-12-10 00:13:38.773358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.014  qpair failed and we were unable to recover it.
00:32:23.014  [2024-12-10 00:13:38.773496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.014  [2024-12-10 00:13:38.773534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.014  qpair failed and we were unable to recover it.
00:32:23.014  [2024-12-10 00:13:38.774084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.014  [2024-12-10 00:13:38.774121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.014  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.779954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.015  [2024-12-10 00:13:38.779994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.015  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.780188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.015  [2024-12-10 00:13:38.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.015  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.780352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.015  [2024-12-10 00:13:38.780388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.015  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.780505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.015  [2024-12-10 00:13:38.780538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.015  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.780647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.015  [2024-12-10 00:13:38.780679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.015  qpair failed and we were unable to recover it.
00:32:23.015  [2024-12-10 00:13:38.780790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.780822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.780930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.780964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.781948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.781982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.782916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.782948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.783880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.783988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.784931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.784962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.785066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.785097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.785206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.785237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.785343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.785376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.785479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.785513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.016  [2024-12-10 00:13:38.785681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.016  [2024-12-10 00:13:38.785714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.016  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.785851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.785883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.785993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.786963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.786996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb74000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.787897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.787927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.788838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.788875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:23.017  [2024-12-10 00:13:38.788994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.789205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:32:23.017  [2024-12-10 00:13:38.789357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.789508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.017  [2024-12-10 00:13:38.789716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.789864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.789896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.790003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.017  [2024-12-10 00:13:38.790036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.790142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.790183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.790306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.790338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.017  qpair failed and we were unable to recover it.
00:32:23.017  [2024-12-10 00:13:38.790443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.017  [2024-12-10 00:13:38.790474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.018  qpair failed and we were unable to recover it.
00:32:23.018  [2024-12-10 00:13:38.790585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.018  [2024-12-10 00:13:38.790617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.018  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.790793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.790825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.791860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.791973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.792004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.792239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.792271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.792557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.792589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.792690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.792727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.792848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.792879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.792989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.793020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.793124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.793156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.793287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.793320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [2024-12-10 00:13:38.793435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.285  [2024-12-10 00:13:38.793466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.285  qpair failed and we were unable to recover it.
00:32:23.285  [... the same error triplet — posix.c:1054:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it." — repeats ~100 more times, timestamps 00:13:38.793581 through 00:13:38.813529; only the first occurrence is kept above ...]
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.813713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.813747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.813940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.813974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.814265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.814300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.814424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.814456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.814590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.814623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.814864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.814897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.815071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.815104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.815228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.815261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.815373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.815405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.815580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.815612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.815872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.815904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.816153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.816199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.816328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.816360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.816479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.816512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.816696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.816726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.816838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.816870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.817040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.817072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.817323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.817356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.817597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.817628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.817748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.817779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.818086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.818118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  Malloc0
00:32:23.289  [2024-12-10 00:13:38.818372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.818405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.818657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.818689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.818929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.818961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.289  [2024-12-10 00:13:38.819081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.819118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289  [2024-12-10 00:13:38.819312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.289  [2024-12-10 00:13:38.819345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.289  qpair failed and we were unable to recover it.
00:32:23.289   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:32:23.289  [2024-12-10 00:13:38.819468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.819501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.290  [2024-12-10 00:13:38.819762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.819794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.819920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.819952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.290  [2024-12-10 00:13:38.820214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.820247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.820416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.820448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.820631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.820663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.820781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.820813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.821071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.821103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.821218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.821251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.821509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.821541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.821731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.821768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.821890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.821921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.822105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.822136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.822358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.822407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.822533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.822572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.822679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.822710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.822826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.822858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.823061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.823092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.823265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.823298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.823412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.823444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.823638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.823670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.823884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.823915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.824110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.824141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.824335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.824366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.824489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.824521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.824650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.824682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.290  [2024-12-10 00:13:38.824792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.290  [2024-12-10 00:13:38.824823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.290  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.824985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.825188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.825393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.825538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.825745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.825826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:23.291  [2024-12-10 00:13:38.825890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.825920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.826109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.826140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.826273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.826305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.826424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.826456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.826721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.826753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.826963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.826994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.827192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.827224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.827352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.827384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.827621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.827653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.827913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.827943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.828125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.828156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.828278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.828311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.828495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.828526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.828699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.828731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb68000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.828996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.829031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.829240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.829279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.829477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.829510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.829682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.829714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.829914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.829947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.830135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.830175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.830368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.830400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.830524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.830557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.830673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.830705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.830931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.830963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.831186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.831345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.831376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.291  qpair failed and we were unable to recover it.
00:32:23.291  [2024-12-10 00:13:38.831497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.291  [2024-12-10 00:13:38.831530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.831746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.831778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.831902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.831934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.832102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.832133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb491a0 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.832360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.832395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.832591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.832623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.832804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.832835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.833024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.833055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.833244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.833277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.833449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.833480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.833652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.833683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.833861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.833893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.834133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.834164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.834301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.834333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.292  [2024-12-10 00:13:38.834572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.834605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.834864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.834895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:32:23.292  [2024-12-10 00:13:38.835132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.835164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.292  [2024-12-10 00:13:38.835416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.835455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.292  [2024-12-10 00:13:38.835724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.835757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.836044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.836075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.836254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.836288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.836527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.836560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.836734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.836765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.837963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.837994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.838235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.838268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.292  qpair failed and we were unable to recover it.
00:32:23.292  [2024-12-10 00:13:38.838471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.292  [2024-12-10 00:13:38.838508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.838679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.838711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.838881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.838913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.839193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.839226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.839486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.839517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.839716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.839749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.839998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.840030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.840215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.840248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.840512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.840544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.840732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.840763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.841020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.841052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.841257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.841290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.841485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.841516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.841685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.841716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.841914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.841947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.842210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.842242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.842364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.842396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.842563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.293  [2024-12-10 00:13:38.842595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.842838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.842870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:32:23.293  [2024-12-10 00:13:38.843127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.843160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.293  [2024-12-10 00:13:38.843447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.843480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.293  [2024-12-10 00:13:38.843664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.843696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.843813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.843844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.843957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.843989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.844198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.844231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.844419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.844457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.844575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.844607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.844721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.844753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.844921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.844952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.845080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.845112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.293  [2024-12-10 00:13:38.845292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.293  [2024-12-10 00:13:38.845325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.293  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.845516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.845547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.845798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.845829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.846101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.846133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.846400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.846432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.846560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.846591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.846699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.846730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.846988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.847019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.847257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.847290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.847472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.847504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.847710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.847742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.847872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.847904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.848077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.848109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.848255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.848288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.848477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.848508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.848744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.848776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.849073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.849104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.849289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.849322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.849576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.849607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.849785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.849816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.850078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.850110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.850236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.850269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.850444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.850476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.294  [2024-12-10 00:13:38.850681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.850713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.850973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:23.294  [2024-12-10 00:13:38.851005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.851201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.851234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.294  [2024-12-10 00:13:38.851416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.851449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.851569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.851601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.851719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.851750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.851946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.851977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.294  [2024-12-10 00:13:38.852083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.294  [2024-12-10 00:13:38.852115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.294  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.852295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.852328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.852432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.852463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.852702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.852739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.852853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.852884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.853013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.853044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.853214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.853248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.853431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.853463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.853668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.853699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.853824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:32:23.295  [2024-12-10 00:13:38.853855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcb6c000b90 with addr=10.0.0.2, port=4420
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.854046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:23.295  [2024-12-10 00:13:38.856484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.856598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.856641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.856663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.856682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.856735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:23.295  [2024-12-10 00:13:38.866419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.866507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.866541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.866566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.866585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.866625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:23.295   00:13:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3250428
00:32:23.295  [2024-12-10 00:13:38.876388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.876455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.876477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.876489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.876499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.876524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.886441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.886506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.886522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.886531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.886538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.886557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.896415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.896475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.896489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.896496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.896501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.896516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.906323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.906375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.906387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.906397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.295  [2024-12-10 00:13:38.906403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.295  [2024-12-10 00:13:38.906417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.295  qpair failed and we were unable to recover it.
00:32:23.295  [2024-12-10 00:13:38.916414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.295  [2024-12-10 00:13:38.916469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.295  [2024-12-10 00:13:38.916482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.295  [2024-12-10 00:13:38.916489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.916495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.916509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.926468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.926523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.926536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.926543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.926549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.926563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.936511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.936578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.936615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.936622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.936628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.936655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.946519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.946569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.946584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.946590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.946596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.946614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.956580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.956633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.956646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.956652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.956658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.956672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.966571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.966628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.966641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.966647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.966653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.966667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.976568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.976631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.976644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.976651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.976657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.976672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.986614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.986683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.986697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.986703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.986709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.986725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:38.996661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:38.996721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:38.996734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:38.996741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:38.996747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:38.996761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.296  qpair failed and we were unable to recover it.
00:32:23.296  [2024-12-10 00:13:39.006719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.296  [2024-12-10 00:13:39.006818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.296  [2024-12-10 00:13:39.006831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.296  [2024-12-10 00:13:39.006837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.296  [2024-12-10 00:13:39.006843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.296  [2024-12-10 00:13:39.006857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.016707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.016763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.016777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.016783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.016789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.016804] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.026727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.026780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.026793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.026800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.026806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.026820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.036757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.036805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.036821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.036827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.036833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.036847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.046794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.046853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.046867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.046873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.046879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.046893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.056808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.056864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.056877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.056883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.056889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.056903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.066838] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.066890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.066903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.066909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.066915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.066929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.076859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.076908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.076921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.076927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.076936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.076951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.086946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.087000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.087013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.087019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.087025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.087040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.096921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.096994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.097008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.097015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.097020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.097035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.106934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.106991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.107004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.107010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.107016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.297  [2024-12-10 00:13:39.107030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.297  qpair failed and we were unable to recover it.
00:32:23.297  [2024-12-10 00:13:39.117000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.297  [2024-12-10 00:13:39.117055] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.297  [2024-12-10 00:13:39.117068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.297  [2024-12-10 00:13:39.117075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.297  [2024-12-10 00:13:39.117081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.298  [2024-12-10 00:13:39.117095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.298  qpair failed and we were unable to recover it.
00:32:23.298  [2024-12-10 00:13:39.127005] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.298  [2024-12-10 00:13:39.127060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.298  [2024-12-10 00:13:39.127073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.298  [2024-12-10 00:13:39.127079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.298  [2024-12-10 00:13:39.127085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.298  [2024-12-10 00:13:39.127100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.298  qpair failed and we were unable to recover it.
00:32:23.619  [2024-12-10 00:13:39.137122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.619  [2024-12-10 00:13:39.137204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.619  [2024-12-10 00:13:39.137219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.619  [2024-12-10 00:13:39.137227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.619  [2024-12-10 00:13:39.137233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.619  [2024-12-10 00:13:39.137250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.619  qpair failed and we were unable to recover it.
00:32:23.619  [2024-12-10 00:13:39.147074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.619  [2024-12-10 00:13:39.147181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.619  [2024-12-10 00:13:39.147195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.619  [2024-12-10 00:13:39.147201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.619  [2024-12-10 00:13:39.147207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.619  [2024-12-10 00:13:39.147223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.619  qpair failed and we were unable to recover it.
00:32:23.619  [2024-12-10 00:13:39.157124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.619  [2024-12-10 00:13:39.157179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.619  [2024-12-10 00:13:39.157192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.619  [2024-12-10 00:13:39.157198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.619  [2024-12-10 00:13:39.157204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.619  [2024-12-10 00:13:39.157219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.619  qpair failed and we were unable to recover it.
00:32:23.619  [2024-12-10 00:13:39.167134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.619  [2024-12-10 00:13:39.167209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.619  [2024-12-10 00:13:39.167228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.619  [2024-12-10 00:13:39.167234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.619  [2024-12-10 00:13:39.167240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.619  [2024-12-10 00:13:39.167254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.619  qpair failed and we were unable to recover it.
00:32:23.619  [2024-12-10 00:13:39.177144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.177203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.177216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.177222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.177228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.177243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.187179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.187244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.187258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.187265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.187271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.187287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.197203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.197253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.197267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.197274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.197280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.197295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.207237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.207291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.207304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.207311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.207319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.207333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.217299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.217351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.217364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.217370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.217376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.217390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.227284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.227343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.227356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.227362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.227368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.227382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.237320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.237403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.237416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.237422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.237428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.237442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.247367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.247439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.247452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.247458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.247463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.247478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.257372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.257452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.257465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.257471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.257477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.257491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.267402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.267459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.267472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.267478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.267484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.267498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.620  qpair failed and we were unable to recover it.
00:32:23.620  [2024-12-10 00:13:39.277447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.620  [2024-12-10 00:13:39.277500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.620  [2024-12-10 00:13:39.277512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.620  [2024-12-10 00:13:39.277518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.620  [2024-12-10 00:13:39.277524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.620  [2024-12-10 00:13:39.277539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.287465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.287532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.287545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.287551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.287557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.287571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.297490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.297546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.297562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.297569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.297575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.297589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.307513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.307570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.307583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.307590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.307596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.307610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.317577] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.317631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.317644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.317650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.317656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.317671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.327589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.327646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.327659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.327665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.327671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.327685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.337607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.337657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.337671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.337680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.337686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.337700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.347694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.347778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.347792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.347798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.347804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.347818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.357583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.357633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.357646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.357652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.357658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.357672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.367700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.367755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.367767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.367774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.367779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.367793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.377787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.377844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.377856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.377862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.377868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.621  [2024-12-10 00:13:39.377882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.621  qpair failed and we were unable to recover it.
00:32:23.621  [2024-12-10 00:13:39.387752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.621  [2024-12-10 00:13:39.387808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.621  [2024-12-10 00:13:39.387821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.621  [2024-12-10 00:13:39.387828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.621  [2024-12-10 00:13:39.387833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.387847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.397772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.397837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.397849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.397856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.397862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.397876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.407795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.407848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.407862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.407868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.407875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.407889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.417836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.417891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.417904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.417910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.417916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.417930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.427851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.427902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.427915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.427922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.427927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.427942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.437911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.437963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.437977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.437983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.437989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.438003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.447916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.447970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.447983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.447989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.447994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.448009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.457960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.458062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.458076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.458082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.458088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.458102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.622  [2024-12-10 00:13:39.467974] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.622  [2024-12-10 00:13:39.468025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.622  [2024-12-10 00:13:39.468039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.622  [2024-12-10 00:13:39.468048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.622  [2024-12-10 00:13:39.468053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.622  [2024-12-10 00:13:39.468068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.622  qpair failed and we were unable to recover it.
00:32:23.882  [2024-12-10 00:13:39.478023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.882  [2024-12-10 00:13:39.478091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.882  [2024-12-10 00:13:39.478104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.882  [2024-12-10 00:13:39.478111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.882  [2024-12-10 00:13:39.478117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.882  [2024-12-10 00:13:39.478132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.882  qpair failed and we were unable to recover it.
00:32:23.882  [2024-12-10 00:13:39.488036] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.882  [2024-12-10 00:13:39.488091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.882  [2024-12-10 00:13:39.488104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.882  [2024-12-10 00:13:39.488110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.882  [2024-12-10 00:13:39.488116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.882  [2024-12-10 00:13:39.488130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.882  qpair failed and we were unable to recover it.
00:32:23.882  [2024-12-10 00:13:39.498090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.882  [2024-12-10 00:13:39.498146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.882  [2024-12-10 00:13:39.498159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.882  [2024-12-10 00:13:39.498168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.882  [2024-12-10 00:13:39.498175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.498189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.508079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.508133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.508146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.508152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.508158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.508179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.518109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.518164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.518180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.518187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.518193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.518207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.528145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.528210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.528223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.528230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.528236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.528250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.538202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.538271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.538284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.538290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.538296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.538311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.548133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.548183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.548196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.548203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.548209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.548223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.558223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.558277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.558290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.558297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.558303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.558317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.568273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.568335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.568347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.568354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.568360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.568374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.578313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.578368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.578382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.578388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.578394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.578408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.588321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.588376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.588389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.588395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.588401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.588415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.598379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.598440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.598456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.883  [2024-12-10 00:13:39.598463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.883  [2024-12-10 00:13:39.598468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.883  [2024-12-10 00:13:39.598483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.883  qpair failed and we were unable to recover it.
00:32:23.883  [2024-12-10 00:13:39.608384] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.883  [2024-12-10 00:13:39.608483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.883  [2024-12-10 00:13:39.608496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.608502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.608508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.608522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.618409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.618461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.618474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.618480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.618486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.618501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.628469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.628541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.628553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.628560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.628565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.628580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.638456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.638509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.638522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.638529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.638538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.638553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.648511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.648565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.648578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.648585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.648591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.648606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.658617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.658682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.658695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.658701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.658708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.658722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.668585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.668639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.668652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.668658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.668664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.668679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.678644] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.678741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.678753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.678760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.678765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.678779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.688562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.688617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.688630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.688636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.884  [2024-12-10 00:13:39.688643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.884  [2024-12-10 00:13:39.688657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.884  qpair failed and we were unable to recover it.
00:32:23.884  [2024-12-10 00:13:39.698623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.884  [2024-12-10 00:13:39.698680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.884  [2024-12-10 00:13:39.698693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.884  [2024-12-10 00:13:39.698699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.885  [2024-12-10 00:13:39.698705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.885  [2024-12-10 00:13:39.698719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.885  qpair failed and we were unable to recover it.
00:32:23.885  [2024-12-10 00:13:39.708609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:23.885  [2024-12-10 00:13:39.708663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:23.885  [2024-12-10 00:13:39.708676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:23.885  [2024-12-10 00:13:39.708682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:23.885  [2024-12-10 00:13:39.708688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:23.885  [2024-12-10 00:13:39.708702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:23.885  qpair failed and we were unable to recover it.
00:32:23.885  [... same five-line CONNECT retry error block repeated every ~10 ms from 00:13:39.718 through 00:13:40.029 ...]
00:32:24.406  [2024-12-10 00:13:40.039656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.039754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.039767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.039773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.039779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.039794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.049645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.049708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.049723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.049729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.049735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.049750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.059599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.059649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.059663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.059669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.059675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.059690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.069689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.069768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.069782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.069788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.069794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.069812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.079710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.079763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.079775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.079782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.079788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.079802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.089708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.089785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.089799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.089805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.089811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.089825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.099708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.406  [2024-12-10 00:13:40.099767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.406  [2024-12-10 00:13:40.099781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.406  [2024-12-10 00:13:40.099787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.406  [2024-12-10 00:13:40.099793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.406  [2024-12-10 00:13:40.099809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.406  qpair failed and we were unable to recover it.
00:32:24.406  [2024-12-10 00:13:40.109786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.109841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.109855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.109861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.109867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.109882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.119867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.119923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.119936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.119942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.119948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.119962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.129842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.129901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.129914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.129921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.129927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.129941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.139938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.139998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.140012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.140018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.140024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.140039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.149831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.149896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.149909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.149916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.149921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.149936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.159995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.160043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.160060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.160067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.160073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.160088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.170004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.170062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.170075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.170082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.170087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.170102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.179998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.180056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.180069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.180075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.180081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.180096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.190061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.190111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.190124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.190130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.190136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.190151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.200058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.200111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.200124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.200131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.200139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.200155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.407  [2024-12-10 00:13:40.210088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.407  [2024-12-10 00:13:40.210144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.407  [2024-12-10 00:13:40.210157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.407  [2024-12-10 00:13:40.210163] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.407  [2024-12-10 00:13:40.210173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.407  [2024-12-10 00:13:40.210187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.407  qpair failed and we were unable to recover it.
00:32:24.408  [2024-12-10 00:13:40.220108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.408  [2024-12-10 00:13:40.220164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.408  [2024-12-10 00:13:40.220180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.408  [2024-12-10 00:13:40.220187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.408  [2024-12-10 00:13:40.220193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.408  [2024-12-10 00:13:40.220207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.408  qpair failed and we were unable to recover it.
00:32:24.408  [2024-12-10 00:13:40.230133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.408  [2024-12-10 00:13:40.230216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.408  [2024-12-10 00:13:40.230230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.408  [2024-12-10 00:13:40.230236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.408  [2024-12-10 00:13:40.230243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.408  [2024-12-10 00:13:40.230258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.408  qpair failed and we were unable to recover it.
00:32:24.408  [2024-12-10 00:13:40.240170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.408  [2024-12-10 00:13:40.240222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.408  [2024-12-10 00:13:40.240235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.408  [2024-12-10 00:13:40.240241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.408  [2024-12-10 00:13:40.240247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.408  [2024-12-10 00:13:40.240262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.408  qpair failed and we were unable to recover it.
00:32:24.408  [2024-12-10 00:13:40.250205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.408  [2024-12-10 00:13:40.250259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.408  [2024-12-10 00:13:40.250272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.408  [2024-12-10 00:13:40.250279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.408  [2024-12-10 00:13:40.250285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.408  [2024-12-10 00:13:40.250299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.408  qpair failed and we were unable to recover it.
00:32:24.408  [2024-12-10 00:13:40.260225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.408  [2024-12-10 00:13:40.260308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.408  [2024-12-10 00:13:40.260322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.408  [2024-12-10 00:13:40.260328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.408  [2024-12-10 00:13:40.260334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.408  [2024-12-10 00:13:40.260350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.408  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.270303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.270406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.270419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.270425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.270431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.270445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.280298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.280352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.280365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.280371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.280377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.280391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.290335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.290398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.290415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.290421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.290427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.290441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.300331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.300386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.300399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.300406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.300412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.300426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.310355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.310411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.310424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.310430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.310436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.310451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.320388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.667  [2024-12-10 00:13:40.320443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.667  [2024-12-10 00:13:40.320455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.667  [2024-12-10 00:13:40.320462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.667  [2024-12-10 00:13:40.320467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.667  [2024-12-10 00:13:40.320481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.667  qpair failed and we were unable to recover it.
00:32:24.667  [2024-12-10 00:13:40.330339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.330397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.330409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.330416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.330425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.330440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.340477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.340542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.340555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.340561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.340568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.340582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.350453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.350502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.350515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.350522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.350527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.350542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.360483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.360583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.360595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.360602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.360607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.360622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.370570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.370626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.370639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.370645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.370651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.370664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.380540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.380595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.380607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.380613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.380619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.380634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.390597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.390684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.390697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.390703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.390709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.390723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.400596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.400648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.400661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.400667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.400673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.400687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.410664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.410736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.410749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.410755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.410761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.410776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.420660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.420715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.420731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.420738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.420743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.668  [2024-12-10 00:13:40.420758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.668  qpair failed and we were unable to recover it.
00:32:24.668  [2024-12-10 00:13:40.430675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.668  [2024-12-10 00:13:40.430731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.668  [2024-12-10 00:13:40.430744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.668  [2024-12-10 00:13:40.430751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.668  [2024-12-10 00:13:40.430756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.430770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.440734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.440787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.440800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.440806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.440812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.440825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.450666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.450722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.450734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.450741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.450746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.450761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.460765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.460821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.460834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.460844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.460849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.460863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.470783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.470838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.470851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.470857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.470863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.470877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.480822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.480876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.480889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.480895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.480901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.480916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.490855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.490909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.490922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.490928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.490934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.490948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.500923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.500976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.500989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.500996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.501002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.501019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.510913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.510961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.510974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.510980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.510986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.511000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.669  [2024-12-10 00:13:40.520954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.669  [2024-12-10 00:13:40.521004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.669  [2024-12-10 00:13:40.521017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.669  [2024-12-10 00:13:40.521024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.669  [2024-12-10 00:13:40.521031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.669  [2024-12-10 00:13:40.521046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.669  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.530989] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.531041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.531054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.531060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.531066] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.531080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.541028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.541113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.541127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.541133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.541139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.541155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.551062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.551149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.551162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.551172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.551178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.551192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.561058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.561115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.561127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.561134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.561139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.561153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.571074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.571131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.571144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.571150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.571156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.571174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.581119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.581180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.581193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.581199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.581205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.581220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.591158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.591225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.591238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.591251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.591256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.591271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.601183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.601229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.601242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.929  [2024-12-10 00:13:40.601248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.929  [2024-12-10 00:13:40.601254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.929  [2024-12-10 00:13:40.601269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.929  qpair failed and we were unable to recover it.
00:32:24.929  [2024-12-10 00:13:40.611216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.929  [2024-12-10 00:13:40.611272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.929  [2024-12-10 00:13:40.611284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.611291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.611297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.611312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.621235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.621291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.621303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.621309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.621316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.621330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.631275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.631328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.631340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.631346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.631352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.631370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.641318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.641374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.641387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.641393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.641399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.641414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.651327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.651381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.651393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.651400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.651406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.651420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.661359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.661446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.661458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.661465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.661470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.661484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.671387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.671442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.671455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.671461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.671467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.671481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.681412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.681463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.681475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.681482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.681487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.681502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.691445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.691505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.691518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.691525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.691530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.691544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.701402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.701458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.701471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.701478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.701483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.930  [2024-12-10 00:13:40.701498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.930  qpair failed and we were unable to recover it.
00:32:24.930  [2024-12-10 00:13:40.711573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.930  [2024-12-10 00:13:40.711626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.930  [2024-12-10 00:13:40.711640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.930  [2024-12-10 00:13:40.711646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.930  [2024-12-10 00:13:40.711653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.711668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.721532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.721585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.721601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.721608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.721613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.721628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.731582] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.731653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.731666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.731674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.731681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.731697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.741596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.741649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.741663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.741670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.741676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.741691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.751611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.751667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.751680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.751687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.751693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.751707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.761636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.761687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.761700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.761706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.761715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.761730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.771671] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.771728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.771741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.771747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.771753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.771767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:24.931  [2024-12-10 00:13:40.781698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:24.931  [2024-12-10 00:13:40.781749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:24.931  [2024-12-10 00:13:40.781762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:24.931  [2024-12-10 00:13:40.781768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:24.931  [2024-12-10 00:13:40.781774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:24.931  [2024-12-10 00:13:40.781789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:24.931  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.791669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.791721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.791735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.791741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.791747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.791762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.801775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.801827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.801839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.801845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.801851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.801866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.811804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.811860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.811874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.811880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.811886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.811901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.821867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.821922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.821935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.821941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.821947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.821961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.831835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.831894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.831907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.831913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.831919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.831934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.841888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.841938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.841950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.841956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.841962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.841976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.851896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.851951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.851967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.851973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.851979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.851993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.861966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.862030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.862043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.862049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.862055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.862069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.871954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.872013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.872026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.872032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.872038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.872052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.881986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.882036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.882049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.191  [2024-12-10 00:13:40.882055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.191  [2024-12-10 00:13:40.882062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.191  [2024-12-10 00:13:40.882076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.191  qpair failed and we were unable to recover it.
00:32:25.191  [2024-12-10 00:13:40.891952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.191  [2024-12-10 00:13:40.892011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.191  [2024-12-10 00:13:40.892024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.192  [2024-12-10 00:13:40.892031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.192  [2024-12-10 00:13:40.892040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.192  [2024-12-10 00:13:40.892054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.192  qpair failed and we were unable to recover it.
00:32:25.192  [2024-12-10 00:13:40.902043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.192  [2024-12-10 00:13:40.902101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.192  [2024-12-10 00:13:40.902114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.192  [2024-12-10 00:13:40.902121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.192  [2024-12-10 00:13:40.902127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.192  [2024-12-10 00:13:40.902141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.192  qpair failed and we were unable to recover it.
00:32:25.192  [2024-12-10 00:13:40.912004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.192  [2024-12-10 00:13:40.912070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.192  [2024-12-10 00:13:40.912083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.192  [2024-12-10 00:13:40.912089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.192  [2024-12-10 00:13:40.912096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.192  [2024-12-10 00:13:40.912110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.192  qpair failed and we were unable to recover it.
00:32:25.192  [2024-12-10 00:13:40.922092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.192  [2024-12-10 00:13:40.922142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.192  [2024-12-10 00:13:40.922155] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.192  [2024-12-10 00:13:40.922162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.192  [2024-12-10 00:13:40.922171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.192  [2024-12-10 00:13:40.922186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.192  qpair failed and we were unable to recover it.
00:32:25.192  [2024-12-10 00:13:40.932137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.192  [2024-12-10 00:13:40.932217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.192  [2024-12-10 00:13:40.932230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.192  [2024-12-10 00:13:40.932237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.192  [2024-12-10 00:13:40.932242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.192  [2024-12-10 00:13:40.932257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.192  qpair failed and we were unable to recover it.
00:32:25.454  [last 7 lines repeated 33 times, timestamps 2024-12-10 00:13:40.942 – 00:13:41.263]
00:32:25.454  [2024-12-10 00:13:41.273143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.454  [2024-12-10 00:13:41.273202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.454  [2024-12-10 00:13:41.273215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.454  [2024-12-10 00:13:41.273222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.454  [2024-12-10 00:13:41.273227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.454  [2024-12-10 00:13:41.273242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.454  qpair failed and we were unable to recover it.
00:32:25.454  [2024-12-10 00:13:41.283068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.454  [2024-12-10 00:13:41.283132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.454  [2024-12-10 00:13:41.283151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.454  [2024-12-10 00:13:41.283157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.454  [2024-12-10 00:13:41.283163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.454  [2024-12-10 00:13:41.283184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.454  qpair failed and we were unable to recover it.
00:32:25.454  [2024-12-10 00:13:41.293195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.454  [2024-12-10 00:13:41.293250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.454  [2024-12-10 00:13:41.293262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.454  [2024-12-10 00:13:41.293268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.454  [2024-12-10 00:13:41.293274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.454  [2024-12-10 00:13:41.293289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.454  qpair failed and we were unable to recover it.
00:32:25.454  [2024-12-10 00:13:41.303213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.454  [2024-12-10 00:13:41.303270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.454  [2024-12-10 00:13:41.303283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.455  [2024-12-10 00:13:41.303289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.455  [2024-12-10 00:13:41.303295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.455  [2024-12-10 00:13:41.303310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.455  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.313250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.313309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.313322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.313328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.313334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.313348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.323313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.323372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.323385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.323391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.323400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.323415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.333313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.333372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.333386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.333392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.333398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.333412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.343316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.343367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.343380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.343387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.343393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.343407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.353369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.353429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.353442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.353448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.353454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.353468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.363303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.363360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.363373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.363379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.363385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.363400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.373349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.712  [2024-12-10 00:13:41.373453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.712  [2024-12-10 00:13:41.373466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.712  [2024-12-10 00:13:41.373472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.712  [2024-12-10 00:13:41.373478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.712  [2024-12-10 00:13:41.373492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.712  qpair failed and we were unable to recover it.
00:32:25.712  [2024-12-10 00:13:41.383464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.383517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.383530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.383536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.383542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.383556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.393464] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.393514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.393527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.393533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.393539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.393553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.403499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.403553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.403566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.403573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.403578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.403593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.413522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.413626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.413641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.413648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.413653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.413668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.423540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.423593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.423605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.423612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.423618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.423632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.433487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.433541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.433554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.433561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.433567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.433581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.443591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.443644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.443656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.443663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.443668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.443683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.453629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.453686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.453699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.453705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.453714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.453729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.463608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.463663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.463676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.463682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.463688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.463702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.473691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.473754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.473767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.473773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.473779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.473793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.483722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.483777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.483792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.483799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.483804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.483818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.493744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.493798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.493810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.493816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.493822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.713  [2024-12-10 00:13:41.493837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.713  qpair failed and we were unable to recover it.
00:32:25.713  [2024-12-10 00:13:41.503776] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.713  [2024-12-10 00:13:41.503829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.713  [2024-12-10 00:13:41.503843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.713  [2024-12-10 00:13:41.503850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.713  [2024-12-10 00:13:41.503856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.714  [2024-12-10 00:13:41.503870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.714  qpair failed and we were unable to recover it.
00:32:25.714  [2024-12-10 00:13:41.513822] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.714  [2024-12-10 00:13:41.513894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.714  [2024-12-10 00:13:41.513907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.714  [2024-12-10 00:13:41.513913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.714  [2024-12-10 00:13:41.513919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.714  [2024-12-10 00:13:41.513933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.714  qpair failed and we were unable to recover it.
00:32:25.714  [2024-12-10 00:13:41.523763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.714  [2024-12-10 00:13:41.523815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.714  [2024-12-10 00:13:41.523828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.714  [2024-12-10 00:13:41.523834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.714  [2024-12-10 00:13:41.523840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.714  [2024-12-10 00:13:41.523855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.714  qpair failed and we were unable to recover it.
00:32:25.714  [2024-12-10 00:13:41.533894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.714  [2024-12-10 00:13:41.533952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.714  [2024-12-10 00:13:41.533965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.714  [2024-12-10 00:13:41.533971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.714  [2024-12-10 00:13:41.533977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.714  [2024-12-10 00:13:41.533991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.714  qpair failed and we were unable to recover it.
00:32:25.714  [2024-12-10 00:13:41.543893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:25.714  [2024-12-10 00:13:41.543948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:25.714  [2024-12-10 00:13:41.543964] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:25.714  [2024-12-10 00:13:41.543970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:25.714  [2024-12-10 00:13:41.543976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:25.714  [2024-12-10 00:13:41.543991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:25.714  qpair failed and we were unable to recover it.
00:32:25.714  [... the preceding 7-line CONNECT failure sequence (ctrlr.c:764 "Unknown controller ID 0x1" → nvme_fabric.c:599/610 rc -5, sct 1, sc 130 → nvme_tcp.c:2348/2125 → nvme_qpair.c:812 "CQ transport error -6" → "qpair failed and we were unable to recover it.") repeats 33 more times at ~10 ms intervals, timestamps 2024-12-10 00:13:41.553855 through 00:13:41.874919, identical apart from timestamps ...]
00:32:26.232  [2024-12-10 00:13:41.884864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.884917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.884930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.884936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.884945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.884959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.894903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.894960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.894973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.894979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.894985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.894999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.904931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.904987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.905001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.905008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.905014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.905028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.914944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.914996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.915009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.915016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.915021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.915036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.924952] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.925006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.925019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.925026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.925032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.925047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.935003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.935062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.935074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.935081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.935086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.935101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.945001] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.945066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.945078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.945085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.945090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.945105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.955068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.955121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.955135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.955141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.955147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.955162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.965082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.965133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.965145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.965151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.965157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.965175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.975118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.975200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.975218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.975224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.975230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.975245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.985146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.985204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.985217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.985224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.985229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.985244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:41.995163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:41.995223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:41.995236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:41.995242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:41.995248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:41.995262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:42.005265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:42.005323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:42.005336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:42.005343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:42.005349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.232  [2024-12-10 00:13:42.005363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.232  qpair failed and we were unable to recover it.
00:32:26.232  [2024-12-10 00:13:42.015216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.232  [2024-12-10 00:13:42.015274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.232  [2024-12-10 00:13:42.015287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.232  [2024-12-10 00:13:42.015294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.232  [2024-12-10 00:13:42.015303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.015318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.025195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.025257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.025270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.025276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.025282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.025296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.035276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.035331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.035343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.035350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.035356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.035370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.045333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.045391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.045403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.045409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.045415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.045429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.055357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.055412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.055424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.055431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.055437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.055451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.065370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.065422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.065435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.065441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.065447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.065461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.075408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.075495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.075508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.075514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.075520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.075534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.233  [2024-12-10 00:13:42.085453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.233  [2024-12-10 00:13:42.085512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.233  [2024-12-10 00:13:42.085525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.233  [2024-12-10 00:13:42.085531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.233  [2024-12-10 00:13:42.085537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.233  [2024-12-10 00:13:42.085551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.233  qpair failed and we were unable to recover it.
00:32:26.491  [2024-12-10 00:13:42.095432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.491  [2024-12-10 00:13:42.095490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.491  [2024-12-10 00:13:42.095504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.491  [2024-12-10 00:13:42.095510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.491  [2024-12-10 00:13:42.095516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.491  [2024-12-10 00:13:42.095530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.491  qpair failed and we were unable to recover it.
00:32:26.491  [2024-12-10 00:13:42.105487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.491  [2024-12-10 00:13:42.105545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.491  [2024-12-10 00:13:42.105559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.491  [2024-12-10 00:13:42.105565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.491  [2024-12-10 00:13:42.105570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.491  [2024-12-10 00:13:42.105584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.491  qpair failed and we were unable to recover it.
00:32:26.491  [2024-12-10 00:13:42.115438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.491  [2024-12-10 00:13:42.115500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.491  [2024-12-10 00:13:42.115513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.491  [2024-12-10 00:13:42.115520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.491  [2024-12-10 00:13:42.115525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.115539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.125495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.125552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.125565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.125571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.125577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.125591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.135572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.135625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.135637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.135643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.135649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.135664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.145609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.145683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.145696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.145706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.145712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.145726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.155650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.155711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.155724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.155731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.155736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.155750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.165699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.165753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.165766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.165772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.165778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.165792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.175691] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.175744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.175756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.175763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.175768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.175783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.185695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.185752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.185764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.185771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.185776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.185794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.195741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.195795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.195808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.195814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.195820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.195835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.205766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.205820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.205833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.205839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.205845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.205860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.215744] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.215797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.215811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.215818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.492  [2024-12-10 00:13:42.215823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.492  [2024-12-10 00:13:42.215838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.492  qpair failed and we were unable to recover it.
00:32:26.492  [2024-12-10 00:13:42.225828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.492  [2024-12-10 00:13:42.225895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.492  [2024-12-10 00:13:42.225909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.492  [2024-12-10 00:13:42.225915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.225921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.225935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.235859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.235916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.235929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.235935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.235943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.235958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.245878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.245930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.245943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.245949] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.245955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.245969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.255915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.255971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.255984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.255991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.255997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.256011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.265978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.266033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.266046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.266053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.266058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.266072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.275970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.276026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.276040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.276049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.276055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.276070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.285994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.286046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.286059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.286066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.286071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.286086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.296033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.296087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.296100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.296107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.296113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.296128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.306023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.306073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.306086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.306093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.306099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.306114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.316098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.316171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.316185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.316192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.316197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.316216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.326098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.493  [2024-12-10 00:13:42.326151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.493  [2024-12-10 00:13:42.326163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.493  [2024-12-10 00:13:42.326173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.493  [2024-12-10 00:13:42.326179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.493  [2024-12-10 00:13:42.326194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.493  qpair failed and we were unable to recover it.
00:32:26.493  [2024-12-10 00:13:42.336203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.494  [2024-12-10 00:13:42.336278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.494  [2024-12-10 00:13:42.336291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.494  [2024-12-10 00:13:42.336298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.494  [2024-12-10 00:13:42.336304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.494  [2024-12-10 00:13:42.336319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.494  qpair failed and we were unable to recover it.
00:32:26.494  [2024-12-10 00:13:42.346172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.494  [2024-12-10 00:13:42.346225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.494  [2024-12-10 00:13:42.346238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.494  [2024-12-10 00:13:42.346244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.494  [2024-12-10 00:13:42.346250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.494  [2024-12-10 00:13:42.346264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.494  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.356207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.356259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.356272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.356278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.356284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.356298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.366231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.366284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.366297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.366303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.366309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.366324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.376268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.376324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.376337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.376343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.376349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.376363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.386309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.386366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.386379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.386386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.386391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.386406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.396311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.396368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.396381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.396387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.396393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.396407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.406338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.406428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.406444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.406450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.406456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.406470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.416368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.416424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.416437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.416443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.416449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.416462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.426388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.426438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.426451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.426457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.426463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.426477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.436419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.436473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.436486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.752  [2024-12-10 00:13:42.436492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.752  [2024-12-10 00:13:42.436498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.752  [2024-12-10 00:13:42.436513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.752  qpair failed and we were unable to recover it.
00:32:26.752  [2024-12-10 00:13:42.446486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.752  [2024-12-10 00:13:42.446539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.752  [2024-12-10 00:13:42.446552] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.446559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.446568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.446582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.456508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.456564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.456576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.456583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.456589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.456603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.466513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.466570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.466583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.466589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.466595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.466609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.476552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.476634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.476647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.476654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.476660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.476673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.486570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.486627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.486641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.486647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.486653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.486669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.496590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.496646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.496660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.496666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.496672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.496686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.506610] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.506670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.506683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.506689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.506695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.506709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.516636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.516729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.516742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.516748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.516754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.516767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.526669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.526721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.526734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.526740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.526746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.526760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.536702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.536757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.536773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.536779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.536784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.536799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.546728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.546781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.546794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.546800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.546806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.546820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.556769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.556823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.556836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.556842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.556848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.556863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.566780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.566834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.566847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.566854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.566860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.566874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.576810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.576869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.576882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.576888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.576897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.576911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.586874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.586929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.586942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.586948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.586955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.586970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.596888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.596941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.596954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.596960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.596966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.596981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:26.753  [2024-12-10 00:13:42.606968] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:26.753  [2024-12-10 00:13:42.607026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:26.753  [2024-12-10 00:13:42.607039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:26.753  [2024-12-10 00:13:42.607045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:26.753  [2024-12-10 00:13:42.607051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:26.753  [2024-12-10 00:13:42.607065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:26.753  qpair failed and we were unable to recover it.
00:32:27.015  [2024-12-10 00:13:42.616939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.015  [2024-12-10 00:13:42.616994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.015  [2024-12-10 00:13:42.617009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.015  [2024-12-10 00:13:42.617015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.015  [2024-12-10 00:13:42.617021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.015  [2024-12-10 00:13:42.617035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.015  qpair failed and we were unable to recover it.
00:32:27.015  [2024-12-10 00:13:42.626969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.015  [2024-12-10 00:13:42.627022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.015  [2024-12-10 00:13:42.627035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.015  [2024-12-10 00:13:42.627042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.015  [2024-12-10 00:13:42.627048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.627063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.637004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.637057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.637070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.637076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.637082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.637096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.647033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.647088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.647101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.647108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.647114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.647129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.657053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.657107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.657120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.657126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.657132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.657147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.667079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.667137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.667150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.667156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.667162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.667182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.677108] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.677156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.677174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.677181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.677187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.677201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.687155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.687212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.687225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.687231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.687237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.687252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.697183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.697238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.697252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.697258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.697264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.697279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.707120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.707182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.707196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.707206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.707212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.707226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.717214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.717267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.717280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.717287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.717293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.717307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.727279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.727331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.727343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.727350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.727356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.727370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.737300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.737402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.737415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.737421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.737427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.737443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.747321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.747376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.747389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.747395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.747401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.747420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.757318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.757403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.757416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.757422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.757428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.757442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.767372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.767442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.767456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.767462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.767468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.767483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.777414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.777511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.777524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.777530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.777536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.777550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.787471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.787525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.787537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.787544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.787550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.787564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.797474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.797532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.797545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.797551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.797557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.797572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.807492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.807548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.807561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.807567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.807573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.807587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.817588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.817673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.817686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.016  [2024-12-10 00:13:42.817692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.016  [2024-12-10 00:13:42.817698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.016  [2024-12-10 00:13:42.817712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.016  qpair failed and we were unable to recover it.
00:32:27.016  [2024-12-10 00:13:42.827474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.016  [2024-12-10 00:13:42.827531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.016  [2024-12-10 00:13:42.827544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.017  [2024-12-10 00:13:42.827550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.017  [2024-12-10 00:13:42.827556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.017  [2024-12-10 00:13:42.827570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.017  qpair failed and we were unable to recover it.
00:32:27.017  [2024-12-10 00:13:42.837592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.017  [2024-12-10 00:13:42.837653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.017  [2024-12-10 00:13:42.837669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.017  [2024-12-10 00:13:42.837676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.017  [2024-12-10 00:13:42.837681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.017  [2024-12-10 00:13:42.837696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.017  qpair failed and we were unable to recover it.
00:32:27.017  [2024-12-10 00:13:42.847531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.017  [2024-12-10 00:13:42.847586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.017  [2024-12-10 00:13:42.847599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.017  [2024-12-10 00:13:42.847606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.017  [2024-12-10 00:13:42.847612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.017  [2024-12-10 00:13:42.847626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.017  qpair failed and we were unable to recover it.
00:32:27.017  [2024-12-10 00:13:42.857562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.017  [2024-12-10 00:13:42.857619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.017  [2024-12-10 00:13:42.857633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.017  [2024-12-10 00:13:42.857641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.017  [2024-12-10 00:13:42.857647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.017  [2024-12-10 00:13:42.857662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.017  qpair failed and we were unable to recover it.
00:32:27.017  [2024-12-10 00:13:42.867602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.017  [2024-12-10 00:13:42.867661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.017  [2024-12-10 00:13:42.867674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.017  [2024-12-10 00:13:42.867680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.017  [2024-12-10 00:13:42.867686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.017  [2024-12-10 00:13:42.867701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.017  qpair failed and we were unable to recover it.
00:32:27.275  [2024-12-10 00:13:42.877619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.275  [2024-12-10 00:13:42.877673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.275  [2024-12-10 00:13:42.877687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.275  [2024-12-10 00:13:42.877693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.275  [2024-12-10 00:13:42.877699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.275  [2024-12-10 00:13:42.877718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.275  qpair failed and we were unable to recover it.
00:32:27.275  [2024-12-10 00:13:42.887708] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.275  [2024-12-10 00:13:42.887761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.275  [2024-12-10 00:13:42.887774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.275  [2024-12-10 00:13:42.887781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.275  [2024-12-10 00:13:42.887787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.275  [2024-12-10 00:13:42.887802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.275  qpair failed and we were unable to recover it.
00:32:27.275  [2024-12-10 00:13:42.897748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.897840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.897853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.897859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.897865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.897881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.907780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.907840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.907854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.907860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.907866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.907881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.917804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.917858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.917871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.917877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.917883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.917897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.927817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.927887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.927900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.927906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.927912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.927926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.937789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.937845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.937857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.937863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.937869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.937884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.947854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.947908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.947921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.947927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.947933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.947948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.957835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.957885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.957898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.957904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.957910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.957924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.967945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.968006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.968022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.968028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.968033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.968048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.977978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.978030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.978044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.978050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.978056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.276  [2024-12-10 00:13:42.978071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.276  qpair failed and we were unable to recover it.
00:32:27.276  [2024-12-10 00:13:42.988052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.276  [2024-12-10 00:13:42.988116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.276  [2024-12-10 00:13:42.988131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.276  [2024-12-10 00:13:42.988139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.276  [2024-12-10 00:13:42.988145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:42.988160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:42.998032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:42.998086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:42.998099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:42.998106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:42.998112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:42.998126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.008060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.008139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.008153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.008159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.008171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.008186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.018125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.018184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.018206] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.018213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.018219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.018239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.028144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.028209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.028223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.028229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.028235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.028249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.038163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.038226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.038239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.038245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.038251] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.038266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.048181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.048238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.048251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.048257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.048263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.048277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.058232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.058287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.058300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.058306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.058312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.058327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.068290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.068350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.068363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.068370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.068376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.068391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.277  [2024-12-10 00:13:43.078203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.277  [2024-12-10 00:13:43.078257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.277  [2024-12-10 00:13:43.078270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.277  [2024-12-10 00:13:43.078278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.277  [2024-12-10 00:13:43.078284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.277  [2024-12-10 00:13:43.078298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.277  qpair failed and we were unable to recover it.
00:32:27.799  [2024-12-10 00:13:43.419241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.799  [2024-12-10 00:13:43.419295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.799  [2024-12-10 00:13:43.419308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.799  [2024-12-10 00:13:43.419314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.799  [2024-12-10 00:13:43.419320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.799  [2024-12-10 00:13:43.419335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.799  qpair failed and we were unable to recover it.
00:32:27.799  [2024-12-10 00:13:43.429267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.799  [2024-12-10 00:13:43.429317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.799  [2024-12-10 00:13:43.429330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.799  [2024-12-10 00:13:43.429337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.799  [2024-12-10 00:13:43.429342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.799  [2024-12-10 00:13:43.429357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.799  qpair failed and we were unable to recover it.
00:32:27.799  [2024-12-10 00:13:43.439287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.799  [2024-12-10 00:13:43.439343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.799  [2024-12-10 00:13:43.439356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.799  [2024-12-10 00:13:43.439363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.799  [2024-12-10 00:13:43.439369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.799  [2024-12-10 00:13:43.439390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.799  qpair failed and we were unable to recover it.
00:32:27.799  [2024-12-10 00:13:43.449324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.799  [2024-12-10 00:13:43.449376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.449389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.449396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.449402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.449417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.459359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.459415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.459428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.459434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.459440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.459454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.469387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.469442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.469454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.469460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.469466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.469480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.479408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.479458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.479471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.479477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.479483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.479497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.489450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.489503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.489516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.489522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.489528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.489543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.499535] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.499591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.499605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.499611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.499616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.499630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.509495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.509548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.509561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.509567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.509572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.509586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.519509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.519563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.519576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.519582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.519588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.519602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.529550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.529603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.529620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.529626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.529632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.529646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.539622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.539705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.539719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.539726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.539732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.539746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.549539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.800  [2024-12-10 00:13:43.549601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.800  [2024-12-10 00:13:43.549614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.800  [2024-12-10 00:13:43.549621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.800  [2024-12-10 00:13:43.549627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.800  [2024-12-10 00:13:43.549641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.800  qpair failed and we were unable to recover it.
00:32:27.800  [2024-12-10 00:13:43.559655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.559706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.559719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.559725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.559731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.559746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.569657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.569705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.569718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.569724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.569732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.569747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.579699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.579752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.579764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.579770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.579776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.579790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.589733] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.589789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.589802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.589808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.589814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.589828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.599769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.599816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.599829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.599836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.599841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.599855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.609742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.609835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.609848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.609854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.609860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.609874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.619815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.619875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.619888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.619894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.619900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.619915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.629841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.629898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.629911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.629917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.629922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.629936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.639864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.639920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.639933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.639939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.639945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.639959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:27.801  [2024-12-10 00:13:43.649914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:27.801  [2024-12-10 00:13:43.649968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:27.801  [2024-12-10 00:13:43.649981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:27.801  [2024-12-10 00:13:43.649987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:27.801  [2024-12-10 00:13:43.649993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:27.801  [2024-12-10 00:13:43.650007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:27.801  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.659931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.659987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.660002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.660009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.660014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.660028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.669954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.670007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.670021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.670027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.670033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.670047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.679900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.679953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.679966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.679972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.679978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.679992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.690022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.690076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.690089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.690096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.690102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.690116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.700038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.700097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.700111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.700120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.700126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.700140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.710065] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.710122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.710135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.710142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.710148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.710162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.720025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.720085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.720099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.720106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.720112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.720127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.730128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.730179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.730193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.730199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.730205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.730219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.740174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.740232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.740245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.740251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.740257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.740272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.750201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.750264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.061  [2024-12-10 00:13:43.750277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.061  [2024-12-10 00:13:43.750283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.061  [2024-12-10 00:13:43.750288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.061  [2024-12-10 00:13:43.750303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.061  qpair failed and we were unable to recover it.
00:32:28.061  [2024-12-10 00:13:43.760209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.061  [2024-12-10 00:13:43.760263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.760275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.760282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.760287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.760302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.770169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.770223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.770235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.770241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.770247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.770261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.780274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.780329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.780341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.780348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.780353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.780368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.790238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.790343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.790355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.790362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.790367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.790381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.800337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.800395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.800408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.800414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.800419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.800434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.810387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.810439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.810452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.810458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.810464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.810477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.820398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.820453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.820466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.820472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.820478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.820492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.830414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.830517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.830530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.830539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.830544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.830560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.840377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.840444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.840457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.840463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.840469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.840483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.850497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.850562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.850576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.062  [2024-12-10 00:13:43.850582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.062  [2024-12-10 00:13:43.850587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.062  [2024-12-10 00:13:43.850602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.062  qpair failed and we were unable to recover it.
00:32:28.062  [2024-12-10 00:13:43.860500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.062  [2024-12-10 00:13:43.860566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.062  [2024-12-10 00:13:43.860579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.860585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.860591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.860605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.063  [2024-12-10 00:13:43.870584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.063  [2024-12-10 00:13:43.870690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.063  [2024-12-10 00:13:43.870703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.870710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.870716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.870733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.063  [2024-12-10 00:13:43.880584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.063  [2024-12-10 00:13:43.880641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.063  [2024-12-10 00:13:43.880654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.880661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.880666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.880681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.063  [2024-12-10 00:13:43.890526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.063  [2024-12-10 00:13:43.890579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.063  [2024-12-10 00:13:43.890592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.890598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.890604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.890618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.063  [2024-12-10 00:13:43.900618] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.063  [2024-12-10 00:13:43.900675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.063  [2024-12-10 00:13:43.900688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.900694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.900700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.900715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.063  [2024-12-10 00:13:43.910653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.063  [2024-12-10 00:13:43.910708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.063  [2024-12-10 00:13:43.910720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.063  [2024-12-10 00:13:43.910727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.063  [2024-12-10 00:13:43.910733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.063  [2024-12-10 00:13:43.910748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.063  qpair failed and we were unable to recover it.
00:32:28.321  [2024-12-10 00:13:43.920594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.321  [2024-12-10 00:13:43.920646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.321  [2024-12-10 00:13:43.920658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.321  [2024-12-10 00:13:43.920664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.321  [2024-12-10 00:13:43.920670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.920684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.930727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.930781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.930794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.930800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.930806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.930820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.940751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.940807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.940819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.940826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.940831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.940846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.950823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.950885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.950898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.950905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.950910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.950925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.960740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.960792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.960808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.960815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.960821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.960835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.970858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.970912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.970926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.970932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.970938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.970953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.980862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.980915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.980928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.980935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.980941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.980955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:43.990920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:43.991002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:43.991016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:43.991024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:43.991029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:43.991045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:44.000917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:44.000970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:44.000984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:44.000990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:44.000999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:44.001015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:44.010946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:44.011001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:44.011015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:44.011021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:44.011029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:44.011043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:44.021013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:44.021076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:44.021089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.322  [2024-12-10 00:13:44.021096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.322  [2024-12-10 00:13:44.021102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.322  [2024-12-10 00:13:44.021117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.322  qpair failed and we were unable to recover it.
00:32:28.322  [2024-12-10 00:13:44.031040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.322  [2024-12-10 00:13:44.031098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.322  [2024-12-10 00:13:44.031112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.031118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.031124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.031139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.041033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.041087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.041100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.041106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.041112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.041126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.051000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.051049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.051062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.051068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.051074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.051089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.061069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.061130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.061144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.061150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.061156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.061175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.071144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.071211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.071224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.071230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.071236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.071251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.081146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.081199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.081213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.081219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.081225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.081240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.091219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.091272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.091288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.091295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.091301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.091316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.101210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.101263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.101277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.101283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.101289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.101304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.111260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.111313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.111325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.111331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.111337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.111351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.121264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.121319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.121333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.121339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.121344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.121359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.131290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.131342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.323  [2024-12-10 00:13:44.131356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.323  [2024-12-10 00:13:44.131362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.323  [2024-12-10 00:13:44.131371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.323  [2024-12-10 00:13:44.131386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.323  qpair failed and we were unable to recover it.
00:32:28.323  [2024-12-10 00:13:44.141325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.323  [2024-12-10 00:13:44.141379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.324  [2024-12-10 00:13:44.141392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.324  [2024-12-10 00:13:44.141398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.324  [2024-12-10 00:13:44.141404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.324  [2024-12-10 00:13:44.141419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.324  qpair failed and we were unable to recover it.
00:32:28.324  [2024-12-10 00:13:44.151395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.324  [2024-12-10 00:13:44.151447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.324  [2024-12-10 00:13:44.151460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.324  [2024-12-10 00:13:44.151466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.324  [2024-12-10 00:13:44.151473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.324  [2024-12-10 00:13:44.151487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.324  qpair failed and we were unable to recover it.
00:32:28.324  [2024-12-10 00:13:44.161371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.324  [2024-12-10 00:13:44.161428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.324  [2024-12-10 00:13:44.161441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.324  [2024-12-10 00:13:44.161447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.324  [2024-12-10 00:13:44.161453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.324  [2024-12-10 00:13:44.161468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.324  qpair failed and we were unable to recover it.
00:32:28.324  [2024-12-10 00:13:44.171395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.324  [2024-12-10 00:13:44.171444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.324  [2024-12-10 00:13:44.171457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.324  [2024-12-10 00:13:44.171464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.324  [2024-12-10 00:13:44.171469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.324  [2024-12-10 00:13:44.171485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.324  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.181374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.181430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.181443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.181450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.181456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.181470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.191459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.191519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.191532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.191539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.191544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.191559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.201409] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.201459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.201472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.201478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.201484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.201499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.211502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.211556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.211570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.211576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.211582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.211596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.221544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.221598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.221613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.221619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.221625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.221640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.231536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.231590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.231604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.231610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.231616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.231630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.241542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.241594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.241606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.241612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.241618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.241633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.251696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.251778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.251791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.251797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.251803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.582  [2024-12-10 00:13:44.251817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.582  qpair failed and we were unable to recover it.
00:32:28.582  [2024-12-10 00:13:44.261592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.582  [2024-12-10 00:13:44.261646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.582  [2024-12-10 00:13:44.261658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.582  [2024-12-10 00:13:44.261668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.582  [2024-12-10 00:13:44.261674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.261688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.583  [2024-12-10 00:13:44.271700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.583  [2024-12-10 00:13:44.271787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.583  [2024-12-10 00:13:44.271799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.583  [2024-12-10 00:13:44.271806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.583  [2024-12-10 00:13:44.271811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.271826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.583  [2024-12-10 00:13:44.281791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.583  [2024-12-10 00:13:44.281876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.583  [2024-12-10 00:13:44.281889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.583  [2024-12-10 00:13:44.281895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.583  [2024-12-10 00:13:44.281901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.281915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.583  [2024-12-10 00:13:44.291751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.583  [2024-12-10 00:13:44.291802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.583  [2024-12-10 00:13:44.291814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.583  [2024-12-10 00:13:44.291821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.583  [2024-12-10 00:13:44.291826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.291840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.583  [2024-12-10 00:13:44.301827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.583  [2024-12-10 00:13:44.301882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.583  [2024-12-10 00:13:44.301895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.583  [2024-12-10 00:13:44.301901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.583  [2024-12-10 00:13:44.301907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.301921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.583  [2024-12-10 00:13:44.311832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.583  [2024-12-10 00:13:44.311890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.583  [2024-12-10 00:13:44.311903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.583  [2024-12-10 00:13:44.311909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.583  [2024-12-10 00:13:44.311914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.583  [2024-12-10 00:13:44.311929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.583  qpair failed and we were unable to recover it.
00:32:28.845  qpair failed and we were unable to recover it.
00:32:28.845  [2024-12-10 00:13:44.652833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.845  [2024-12-10 00:13:44.652890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.845  [2024-12-10 00:13:44.652906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.845  [2024-12-10 00:13:44.652913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.845  [2024-12-10 00:13:44.652918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.845  [2024-12-10 00:13:44.652933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.845  qpair failed and we were unable to recover it.
00:32:28.845  [2024-12-10 00:13:44.662818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.845  [2024-12-10 00:13:44.662871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.845  [2024-12-10 00:13:44.662884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.845  [2024-12-10 00:13:44.662890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.845  [2024-12-10 00:13:44.662896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.845  [2024-12-10 00:13:44.662910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.845  qpair failed and we were unable to recover it.
00:32:28.845  [2024-12-10 00:13:44.672843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.845  [2024-12-10 00:13:44.672899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.845  [2024-12-10 00:13:44.672912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.845  [2024-12-10 00:13:44.672918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.845  [2024-12-10 00:13:44.672924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.845  [2024-12-10 00:13:44.672938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.845  qpair failed and we were unable to recover it.
00:32:28.845  [2024-12-10 00:13:44.682898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.845  [2024-12-10 00:13:44.682955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.845  [2024-12-10 00:13:44.682968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.845  [2024-12-10 00:13:44.682975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.845  [2024-12-10 00:13:44.682980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.845  [2024-12-10 00:13:44.682995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.845  qpair failed and we were unable to recover it.
00:32:28.845  [2024-12-10 00:13:44.692913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:28.845  [2024-12-10 00:13:44.692977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:28.845  [2024-12-10 00:13:44.692990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:28.845  [2024-12-10 00:13:44.692996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:28.845  [2024-12-10 00:13:44.693005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:28.845  [2024-12-10 00:13:44.693020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:28.845  qpair failed and we were unable to recover it.
00:32:29.103  [2024-12-10 00:13:44.702926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:29.104  [2024-12-10 00:13:44.702985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:29.104  [2024-12-10 00:13:44.702998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:29.104  [2024-12-10 00:13:44.703005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:29.104  [2024-12-10 00:13:44.703010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb6c000b90
00:32:29.104  [2024-12-10 00:13:44.703024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:32:29.104  qpair failed and we were unable to recover it.
00:32:29.104  [2024-12-10 00:13:44.712955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:29.104  [2024-12-10 00:13:44.713092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:29.104  [2024-12-10 00:13:44.713139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:29.104  [2024-12-10 00:13:44.713160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:29.104  [2024-12-10 00:13:44.713188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb68000b90
00:32:29.104  [2024-12-10 00:13:44.713232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:29.104  qpair failed and we were unable to recover it.
00:32:29.104  [2024-12-10 00:13:44.723000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:29.104  [2024-12-10 00:13:44.723104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:29.104  [2024-12-10 00:13:44.723127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:29.104  [2024-12-10 00:13:44.723139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:29.104  [2024-12-10 00:13:44.723150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb68000b90
00:32:29.104  [2024-12-10 00:13:44.723183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:32:29.104  qpair failed and we were unable to recover it.
00:32:29.104  [2024-12-10 00:13:44.733046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:29.104  [2024-12-10 00:13:44.733128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:29.104  [2024-12-10 00:13:44.733183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:29.104  [2024-12-10 00:13:44.733207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:29.104  [2024-12-10 00:13:44.733224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb74000b90
00:32:29.104  [2024-12-10 00:13:44.733265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:29.104  qpair failed and we were unable to recover it.
00:32:29.104  [2024-12-10 00:13:44.743049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:32:29.104  [2024-12-10 00:13:44.743132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:32:29.104  [2024-12-10 00:13:44.743187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:32:29.104  [2024-12-10 00:13:44.743200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:32:29.104  [2024-12-10 00:13:44.743211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcb74000b90
00:32:29.104  [2024-12-10 00:13:44.743238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:32:29.104  qpair failed and we were unable to recover it.
00:32:29.104  [2024-12-10 00:13:44.743341] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed
00:32:29.104  A controller has encountered a failure and is being reset.
00:32:29.104  [2024-12-10 00:13:44.743452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb570f0 (9): Bad file descriptor
00:32:29.104  Controller properly reset.
00:32:29.104  Initializing NVMe Controllers
00:32:29.104  Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:29.104  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:29.104  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:32:29.104  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:32:29.104  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:32:29.104  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:32:29.104  Initialization complete. Launching workers.
00:32:29.104  Starting thread on core 1
00:32:29.104  Starting thread on core 2
00:32:29.104  Starting thread on core 3
00:32:29.104  Starting thread on core 0
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:32:29.104  
00:32:29.104  real	0m11.445s
00:32:29.104  user	0m21.931s
00:32:29.104  sys	0m4.943s
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:32:29.104  ************************************
00:32:29.104  END TEST nvmf_target_disconnect_tc2
00:32:29.104  ************************************
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:29.104   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:29.104  rmmod nvme_tcp
00:32:29.104  rmmod nvme_fabrics
00:32:29.104  rmmod nvme_keyring
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3250957 ']'
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3250957
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3250957 ']'
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3250957
00:32:29.364    00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:32:29.364   00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:29.364    00:13:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3250957
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3250957'
00:32:29.364  killing process with pid 3250957
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3250957
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3250957
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:29.364   00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:29.365    00:13:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:31.899   00:13:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:31.899  
00:32:31.899  real	0m20.187s
00:32:31.899  user	0m49.697s
00:32:31.899  sys	0m9.871s
00:32:31.899   00:13:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:31.899   00:13:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:32:31.899  ************************************
00:32:31.900  END TEST nvmf_target_disconnect
00:32:31.900  ************************************
00:32:31.900   00:13:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:32:31.900  
00:32:31.900  real	5m54.099s
00:32:31.900  user	10m41.439s
00:32:31.900  sys	1m57.870s
00:32:31.900   00:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:31.900   00:13:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:32:31.900  ************************************
00:32:31.900  END TEST nvmf_host
00:32:31.900  ************************************
00:32:31.900   00:13:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:32:31.900   00:13:47 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:32:31.900   00:13:47 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:32:31.900   00:13:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:31.900   00:13:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:31.900   00:13:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:31.900  ************************************
00:32:31.900  START TEST nvmf_target_core_interrupt_mode
00:32:31.900  ************************************
00:32:31.900   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:32:31.900  * Looking for test storage...
00:32:31.900  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:31.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:31.900  		--rc genhtml_branch_coverage=1
00:32:31.900  		--rc genhtml_function_coverage=1
00:32:31.900  		--rc genhtml_legend=1
00:32:31.900  		--rc geninfo_all_blocks=1
00:32:31.900  		--rc geninfo_unexecuted_blocks=1
00:32:31.900  		
00:32:31.900  		'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:31.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:31.900  		--rc genhtml_branch_coverage=1
00:32:31.900  		--rc genhtml_function_coverage=1
00:32:31.900  		--rc genhtml_legend=1
00:32:31.900  		--rc geninfo_all_blocks=1
00:32:31.900  		--rc geninfo_unexecuted_blocks=1
00:32:31.900  		
00:32:31.900  		'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:31.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:31.900  		--rc genhtml_branch_coverage=1
00:32:31.900  		--rc genhtml_function_coverage=1
00:32:31.900  		--rc genhtml_legend=1
00:32:31.900  		--rc geninfo_all_blocks=1
00:32:31.900  		--rc geninfo_unexecuted_blocks=1
00:32:31.900  		
00:32:31.900  		'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:31.900  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:31.900  		--rc genhtml_branch_coverage=1
00:32:31.900  		--rc genhtml_function_coverage=1
00:32:31.900  		--rc genhtml_legend=1
00:32:31.900  		--rc geninfo_all_blocks=1
00:32:31.900  		--rc geninfo_unexecuted_blocks=1
00:32:31.900  		
00:32:31.900  		'
00:32:31.900    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:32:31.900   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:32:31.900   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:31.900     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:31.901      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:31.901      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:31.901      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:31.901      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:32:31.901      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:31.901  ************************************
00:32:31.901  START TEST nvmf_abort
00:32:31.901  ************************************
00:32:31.901   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:32:31.901  * Looking for test storage...
00:32:31.901  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:31.901    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version
00:32:31.901     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:32.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:32.161  		--rc genhtml_branch_coverage=1
00:32:32.161  		--rc genhtml_function_coverage=1
00:32:32.161  		--rc genhtml_legend=1
00:32:32.161  		--rc geninfo_all_blocks=1
00:32:32.161  		--rc geninfo_unexecuted_blocks=1
00:32:32.161  		
00:32:32.161  		'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:32.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:32.161  		--rc genhtml_branch_coverage=1
00:32:32.161  		--rc genhtml_function_coverage=1
00:32:32.161  		--rc genhtml_legend=1
00:32:32.161  		--rc geninfo_all_blocks=1
00:32:32.161  		--rc geninfo_unexecuted_blocks=1
00:32:32.161  		
00:32:32.161  		'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:32.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:32.161  		--rc genhtml_branch_coverage=1
00:32:32.161  		--rc genhtml_function_coverage=1
00:32:32.161  		--rc genhtml_legend=1
00:32:32.161  		--rc geninfo_all_blocks=1
00:32:32.161  		--rc geninfo_unexecuted_blocks=1
00:32:32.161  		
00:32:32.161  		'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:32.161  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:32.161  		--rc genhtml_branch_coverage=1
00:32:32.161  		--rc genhtml_function_coverage=1
00:32:32.161  		--rc genhtml_legend=1
00:32:32.161  		--rc geninfo_all_blocks=1
00:32:32.161  		--rc geninfo_unexecuted_blocks=1
00:32:32.161  		
00:32:32.161  		'
00:32:32.161   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:32.161     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:32.161    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:32.162     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:32:32.162     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:32.162     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:32.162     00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:32.162      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:32.162      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:32.162      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:32.162      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:32:32.162      00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:32.162    00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:32:32.162   00:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:32:38.739  Found 0000:af:00.0 (0x8086 - 0x159b)
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:32:38.739  Found 0000:af:00.1 (0x8086 - 0x159b)
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:32:38.739  Found net devices under 0000:af:00.0: cvl_0_0
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:38.739   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:32:38.740  Found net devices under 0000:af:00.1: cvl_0_1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:38.740  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:38.740  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms
00:32:38.740  
00:32:38.740  --- 10.0.0.2 ping statistics ---
00:32:38.740  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:38.740  rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:38.740  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:38.740  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:32:38.740  
00:32:38.740  --- 10.0.0.1 ping statistics ---
00:32:38.740  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:38.740  rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=3255577
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3255577
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3255577 ']'
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:38.740  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:38.740   00:13:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.740  [2024-12-10 00:13:53.867404] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:38.740  [2024-12-10 00:13:53.868316] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:32:38.740  [2024-12-10 00:13:53.868352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:38.740  [2024-12-10 00:13:53.946628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:38.740  [2024-12-10 00:13:53.986354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:38.740  [2024-12-10 00:13:53.986390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:38.740  [2024-12-10 00:13:53.986397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:38.740  [2024-12-10 00:13:53.986403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:38.740  [2024-12-10 00:13:53.986408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:38.740  [2024-12-10 00:13:53.987660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:38.740  [2024-12-10 00:13:53.987763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:38.740  [2024-12-10 00:13:53.987764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:38.740  [2024-12-10 00:13:54.054998] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:38.740  [2024-12-10 00:13:54.055827] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:38.740  [2024-12-10 00:13:54.056185] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:38.740  [2024-12-10 00:13:54.056298] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:38.740   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:38.740   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:32:38.740   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:38.740   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:38.740   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741  [2024-12-10 00:13:54.124640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741  Malloc0
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741  Delay0
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741  [2024-12-10 00:13:54.216483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.741   00:13:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:32:38.741  [2024-12-10 00:13:54.301258] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:32:40.645  Initializing NVMe Controllers
00:32:40.645  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:32:40.645  controller IO queue size 128 less than required
00:32:40.645  Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:32:40.645  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:32:40.645  Initialization complete. Launching workers.
00:32:40.645  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38119
00:32:40.645  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38176, failed to submit 66
00:32:40.645  	 success 38119, unsuccessful 57, failed 0
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:40.645  rmmod nvme_tcp
00:32:40.645  rmmod nvme_fabrics
00:32:40.645  rmmod nvme_keyring
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3255577 ']'
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3255577
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3255577 ']'
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3255577
00:32:40.645    00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:32:40.645   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:40.645    00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3255577
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3255577'
00:32:40.905  killing process with pid 3255577
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3255577
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3255577
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:40.905   00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:40.905    00:13:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:32:43.440  
00:32:43.440  real	0m11.121s
00:32:43.440  user	0m10.196s
00:32:43.440  sys	0m5.609s
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:32:43.440  ************************************
00:32:43.440  END TEST nvmf_abort
00:32:43.440  ************************************
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:43.440  ************************************
00:32:43.440  START TEST nvmf_ns_hotplug_stress
00:32:43.440  ************************************
00:32:43.440   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode
00:32:43.440  * Looking for test storage...
00:32:43.440  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:32:43.440    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:32:43.440     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version
00:32:43.440     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:32:43.440    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:32:43.440    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-:
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-:
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<'
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 ))
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:32:43.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.441  		--rc genhtml_branch_coverage=1
00:32:43.441  		--rc genhtml_function_coverage=1
00:32:43.441  		--rc genhtml_legend=1
00:32:43.441  		--rc geninfo_all_blocks=1
00:32:43.441  		--rc geninfo_unexecuted_blocks=1
00:32:43.441  		
00:32:43.441  		'
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:32:43.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.441  		--rc genhtml_branch_coverage=1
00:32:43.441  		--rc genhtml_function_coverage=1
00:32:43.441  		--rc genhtml_legend=1
00:32:43.441  		--rc geninfo_all_blocks=1
00:32:43.441  		--rc geninfo_unexecuted_blocks=1
00:32:43.441  		
00:32:43.441  		'
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:32:43.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.441  		--rc genhtml_branch_coverage=1
00:32:43.441  		--rc genhtml_function_coverage=1
00:32:43.441  		--rc genhtml_legend=1
00:32:43.441  		--rc geninfo_all_blocks=1
00:32:43.441  		--rc geninfo_unexecuted_blocks=1
00:32:43.441  		
00:32:43.441  		'
00:32:43.441    00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:32:43.441  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:32:43.441  		--rc genhtml_branch_coverage=1
00:32:43.441  		--rc genhtml_function_coverage=1
00:32:43.441  		--rc genhtml_legend=1
00:32:43.441  		--rc geninfo_all_blocks=1
00:32:43.441  		--rc geninfo_unexecuted_blocks=1
00:32:43.441  		
00:32:43.441  		'
00:32:43.441   00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:32:43.441     00:13:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:32:43.441     00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:32:43.441    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:32:43.441     00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob
00:32:43.441     00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:32:43.441     00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:32:43.441     00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:32:43.441      00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.441      00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.442      00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.442      00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:32:43.442      00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0
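The build_nvmf_app_args trace above conditionally appends flags to the NVMF_APP array: the SHM id and trace mask always, --interrupt-mode only when interrupt testing is enabled. A minimal stand-alone sketch of that assembly (the binary name and variable defaults here are stand-ins, not the exact nvmf/common.sh code):

```shell
# Re-creation of the argument assembly traced above; values are stand-ins.
build_nvmf_app_args() {
  NVMF_APP=(nvmf_tgt)                                # base command (stand-in path)
  NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)   # SHM id + tracepoint mask
  if [[ "${SPDK_TEST_NVME_INTERRUPT:-0}" -eq 1 ]]; then
    NVMF_APP+=(--interrupt-mode)                     # interrupt-mode run, as in this job
  fi
}

SPDK_TEST_NVME_INTERRUPT=1
build_nvmf_app_args
echo "${NVMF_APP[@]}"
```

Keeping the command as a bash array (rather than a string) preserves argument boundaries when it is later prefixed with the netns exec wrapper.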
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:43.442    00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable
00:32:43.442   00:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=()
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:32:50.009   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:32:50.010  Found 0000:af:00.0 (0x8086 - 0x159b)
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:32:50.010  Found 0000:af:00.1 (0x8086 - 0x159b)
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:32:50.010  Found net devices under 0000:af:00.0: cvl_0_0
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:32:50.010  Found net devices under 0000:af:00.1: cvl_0_1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 ))
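The lookup at common.sh@411 above maps each PCI address to its bound kernel interfaces by globbing sysfs. A small sketch of that mapping; SYSFS_ROOT is a hypothetical override added here so the function can be exercised without real hardware:

```shell
# For a PCI address, the kernel lists bound net interfaces under
# /sys/bus/pci/devices/<addr>/net/. Glob that directory and strip the path,
# keeping only the interface names (e.g. cvl_0_0).
pci_to_net_devs() {
  local pci=$1 root=${SYSFS_ROOT:-/sys/bus/pci/devices}
  local devs=("$root/$pci/net/"*)
  devs=("${devs[@]##*/}")     # same trim as common.sh@427
  echo "${devs[@]}"
}
```

On this host the glob resolves to cvl_0_0 for 0000:af:00.0 and cvl_0_1 for 0000:af:00.1, which is what the "Found net devices under" lines report.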
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:50.010  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:50.010  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:32:50.010  
00:32:50.010  --- 10.0.0.2 ping statistics ---
00:32:50.010  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:50.010  rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:32:50.010   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:50.010  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:50.011  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms
00:32:50.011  
00:32:50.011  --- 10.0.0.1 ping statistics ---
00:32:50.011  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:50.011  rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms
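The nvmf_tcp_init sequence above builds the test topology: the target NIC is moved into a private network namespace and addressed 10.0.0.2/24, while the initiator NIC stays in the root namespace as 10.0.0.1/24, and both ends are verified with a ping. A dry-run sketch of that plumbing (commands are echoed rather than executed, so it runs without root; drop the echo to apply it for real):

```shell
# Dry-run of the namespace setup logged above. Real usage needs root and
# two physical/virtual NICs; here $run=echo keeps it side-effect free.
setup_tcp_netns() {
  local tgt=$1 ini=$2 ns=${3:-${tgt}_ns_spdk} run=echo
  $run ip -4 addr flush "$tgt"
  $run ip -4 addr flush "$ini"
  $run ip netns add "$ns"
  $run ip link set "$tgt" netns "$ns"                      # target NIC into the ns
  $run ip addr add 10.0.0.1/24 dev "$ini"                  # initiator side
  $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"  # target side
  $run ip link set "$ini" up
  $run ip netns exec "$ns" ip link set "$tgt" up
}

setup_tcp_netns cvl_0_0 cvl_0_1
```

Isolating the target NIC in its own namespace forces the initiator↔target traffic over the physical link instead of the loopback path, which is the point of the phy job.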
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp
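Lines common.sh@482-493 above compose NVMF_TRANSPORT_OPTS: "-t tcp" from the selected transport, then "-o" appended specifically for TCP (per the trace; the flag's meaning is owned by nvmf_create_transport). A minimal sketch of that composition:

```shell
# Compose the transport options string the way the trace above shows:
# base "-t <transport>", plus the TCP-only "-o" flag.
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ $TEST_TRANSPORT == tcp ]]; then
  NVMF_TRANSPORT_OPTS+=" -o"
fi
echo "$NVMF_TRANSPORT_OPTS"
```

The resulting string is what ns_hotplug_stress.sh@27 passes to rpc.py nvmf_create_transport, together with "-u 8192".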
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3259501
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3259501
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3259501 ']'
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:50.011  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:32:50.011   00:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:50.011  [2024-12-10 00:14:04.990699] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:32:50.011  [2024-12-10 00:14:04.991584] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:32:50.011  [2024-12-10 00:14:04.991616] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:32:50.011  [2024-12-10 00:14:05.069043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:32:50.011  [2024-12-10 00:14:05.109766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:32:50.011  [2024-12-10 00:14:05.109805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:32:50.011  [2024-12-10 00:14:05.109813] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:32:50.011  [2024-12-10 00:14:05.109819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:32:50.011  [2024-12-10 00:14:05.109824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:32:50.011  [2024-12-10 00:14:05.111138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:32:50.011  [2024-12-10 00:14:05.111244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:32:50.011  [2024-12-10 00:14:05.111244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:32:50.011  [2024-12-10 00:14:05.179165] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:32:50.011  [2024-12-10 00:14:05.180106] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:32:50.011  [2024-12-10 00:14:05.180269] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:32:50.011  [2024-12-10 00:14:05.180431] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:32:50.011   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:32:50.011   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:32:50.011   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:32:50.011   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:50.011   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
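The waitforlisten step above (common.sh@510, autotest_common.sh@839-868) polls until the freshly started target is reachable on /var/tmp/spdk.sock or the process dies, bounded by max_retries. A simplified, root-free sketch of that pattern; it probes only for the socket path's existence, whereas the real helper also issues an RPC over it:

```shell
# Poll until $sock appears (target is up) or $pid exits (startup failed),
# giving up after max_retries * 0.1s.
waitforlisten() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # process exited early
    [[ -e $sock ]] && return 0               # socket showed up
    sleep 0.1
  done
  return 1                                   # timed out
}
```

Checking the pid on every iteration is what lets the harness fail fast (return 1) instead of burning the full retry budget when nvmf_tgt crashes during init.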
00:32:50.270   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:32:50.270   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:32:50.270   00:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:32:50.270  [2024-12-10 00:14:06.032064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:32:50.270   00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:32:50.529   00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:32:50.788  [2024-12-10 00:14:06.428499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:32:50.788   00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:51.047   00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:32:51.047  Malloc0
00:32:51.047   00:14:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:32:51.313  Delay0
00:32:51.313   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:51.572   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:32:51.831  NULL1
00:32:51.831   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:32:51.831   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:32:51.831   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3259976
00:32:51.831   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:51.831   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:52.090   00:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:52.348   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:32:52.348   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:32:52.607  true
00:32:52.607   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:52.607   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:52.607   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:52.867   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:32:52.867   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:32:53.127  true
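The iterations above repeat a fixed cycle from ns_hotplug_stress.sh: while the perf initiator is alive (kill -0), detach namespace 1, re-attach Delay0, and grow NULL1 by one block. A stand-alone sketch of that loop; rpc() here is a hypothetical stub that echoes instead of invoking scripts/rpc.py, and any live pid stands in for the perf process:

```shell
# Stub out rpc.py so the loop structure is runnable without a target.
rpc() { echo rpc.py "$@"; }

# One hotplug-stress loop: churn ns 1 and resize NULL1 while $perf_pid lives.
ns_hotplug_loop() {
  local perf_pid=$1 null_size=1000 iters=0
  while kill -0 "$perf_pid" 2>/dev/null; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 >/dev/null
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 >/dev/null
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size" >/dev/null
    iters=$((iters + 1))
  done
  echo "$iters"
}
```

Because the loop keys off the perf pid, the stress naturally runs for exactly the initiator's 30-second lifetime ("-t 30" at ns_hotplug_stress.sh@40), and the sc=11 read errors interleaved below are the expected I/O failures while namespace 1 is detached.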
00:32:53.127   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:53.127   00:14:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:53.384  Read completed with error (sct=0, sc=11)
00:32:53.384   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:53.384  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.384  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.384  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.384  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.384  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:53.662  [2024-12-10 00:14:09.246555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.246994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.247963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.248990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.249986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.250024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.250064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.250101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.663  [2024-12-10 00:14:09.250142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.250974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.251949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.252998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.664  [2024-12-10 00:14:09.253794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.253832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.253874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.253919] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.253961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.254973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.255541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.256964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.665  [2024-12-10 00:14:09.257709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.666  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.666  [2024-12-10 00:14:09.261322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.270485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.270535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.270582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.270628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.270677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.271969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.272973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.669  [2024-12-10 00:14:09.273290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.273836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.274699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.275976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.276968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277492] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.670  [2024-12-10 00:14:09.277829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.277874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.277915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.277959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.278963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279002] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.279971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.280989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.281035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.281075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.281119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.281162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [2024-12-10 00:14:09.281207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.671  [... previous nvmf_bdev_ctrlr_read_cmd error repeated 98 more times, timestamps 00:14:09.281245 through 00:14:09.286246 ...]
00:32:53.673   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:32:53.673  [... nvmf_bdev_ctrlr_read_cmd error repeated 9 more times, timestamps 00:14:09.286285 through 00:14:09.286619 ...]
00:32:53.673   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:32:53.673  [... nvmf_bdev_ctrlr_read_cmd error repeated 150 more times, timestamps 00:14:09.286660 through 00:14:09.294033 ...]
00:32:53.675  [2024-12-10 00:14:09.294071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294922] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.294966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.295987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296165] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.296968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.675  [2024-12-10 00:14:09.297671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.297956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.298993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.299794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.300979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301424] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301584] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.676  [2024-12-10 00:14:09.301908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.301950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.301998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.302997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.303991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.677  [2024-12-10 00:14:09.304630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.679  (previous *ERROR* line repeated 135 more times, timestamps 00:14:09.304676 through 00:14:09.311415)
00:32:53.679  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.679  [2024-12-10 00:14:09.311462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  (previous *ERROR* line repeated 123 more times, timestamps 00:14:09.311516 through 00:14:09.318025)
00:32:53.680  [2024-12-10 00:14:09.318070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.680  [2024-12-10 00:14:09.318551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.318968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.319975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320282] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.320984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.321972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.322013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.322052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.322523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.322568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.681  [2024-12-10 00:14:09.322611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.322963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.323954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.324991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.325973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.682  [2024-12-10 00:14:09.326328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326572] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.326692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.327972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.328015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.328052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.328091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.683  [2024-12-10 00:14:09.328130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [... previous error message repeated 260 times between 00:14:09.328174 and 00:14:09.340979 ...]
00:32:53.686  [2024-12-10 00:14:09.341020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.341966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.686  [2024-12-10 00:14:09.342581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.342965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.343977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.344963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.345997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.346037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.346839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.346888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.346927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.687  [2024-12-10 00:14:09.346955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.346995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.347993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.348966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349048] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349526] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.349994] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.688  [2024-12-10 00:14:09.350753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.350798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.350849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.350897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.350946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.350996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  [2024-12-10 00:14:09.351315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.689  Message suppressed: previous line repeated 255 times (00:14:09.351360 through 00:14:09.364418)
00:32:53.692  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.692  [2024-12-10 00:14:09.364458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  (last message repeated 234 more times)
00:32:53.695  [2024-12-10 00:14:09.376391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.376961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377630] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377680] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.377969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.695  [2024-12-10 00:14:09.378370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.378648] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379590] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.379977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.380987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.381669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.696  [2024-12-10 00:14:09.382944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.382989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.383982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384085] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.384976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.385992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386127] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386299] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.697  [2024-12-10 00:14:09.386674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.399995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.400952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.401006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.401833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.401886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.401931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.401976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.701  [2024-12-10 00:14:09.402986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403530] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.403979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.404999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.405998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406660] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.702  [2024-12-10 00:14:09.406738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.406980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.407473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.408989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409601] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409846] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.409966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410092] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.703  [2024-12-10 00:14:09.410253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.704  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.707  [2024-12-10 00:14:09.423254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.423814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424764] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.424989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425884] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.425968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.426980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.427025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.427071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.427121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.707  [2024-12-10 00:14:09.427173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.427986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.428975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.429987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.430987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431289] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431373] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.708  [2024-12-10 00:14:09.431536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.431974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432055] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432249] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.432953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.709  [2024-12-10 00:14:09.433462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [... previous nvmf_bdev_ctrlr_read_cmd error line repeated 260 more times, timestamps 00:14:09.433503 through 00:14:09.447422 ...]
00:32:53.712  [2024-12-10 00:14:09.447468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.447991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448134] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.448836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.712  [2024-12-10 00:14:09.449383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.449980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450269] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450440] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.450944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.451974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.452974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453164] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.453976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.713  [2024-12-10 00:14:09.454022] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454545] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.454969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455049] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.455985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456101] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456516] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.456988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457506] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.714  [2024-12-10 00:14:09.457606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.715  true
00:32:53.716  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:53.717  [2024-12-10 00:14:09.471037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471533] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.471996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472156] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472435] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.472967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.473969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.717  [2024-12-10 00:14:09.474329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.474988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475392] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475797] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475878] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.475993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476632] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.476703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.477991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478548] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.478967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479183] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479416] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479639] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.479980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.718  [2024-12-10 00:14:09.480256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480686] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480778] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  [2024-12-10 00:14:09.480912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.719  (last message repeated 98 times)
00:32:53.719   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:53.719  (last message repeated 8 times)
00:32:53.719   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:53.720  (last message repeated 151 times)
00:32:53.721  [2024-12-10 00:14:09.494066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494581] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.494987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495035] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.495978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496024] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.496355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497300] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.497972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.721  [2024-12-10 00:14:09.498549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498908] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.498953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499396] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.499899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500199] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500325] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.500973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501010] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.501942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502093] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.502928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.503970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504052] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504130] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504214] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504444] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:53.722  [2024-12-10 00:14:09.504637] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [previous error line repeated 242 times; per-message timestamps 00:14:09.504676 through 00:14:09.516765 omitted]
00:32:54.001  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:54.001  [2024-12-10 00:14:09.516808] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [previous error line repeated 16 times; per-message timestamps 00:14:09.516854 through 00:14:09.517535 omitted]
00:32:54.001  [2024-12-10 00:14:09.517583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.517624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.517682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.517729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.518161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.518216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.001  [2024-12-10 00:14:09.518255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518509] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.518983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519381] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.519965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520054] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520515] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520617] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520760] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.520888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521072] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521347] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521776] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.002  [2024-12-10 00:14:09.521816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.521858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.521899] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.521937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.521977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522059] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.522279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523409] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523739] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.523973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524109] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524614] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524785] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.524999] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525039] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525279] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525366] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525767] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.525840] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.003  [2024-12-10 00:14:09.526317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526423] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526698] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.526979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527220] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527263] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527728] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.004  [2024-12-10 00:14:09.527921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541631] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.541964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542001] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.007  [2024-12-10 00:14:09.542499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542703] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542745] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.542985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543623] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.543894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544610] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544657] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.544991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545858] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.545973] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.008  [2024-12-10 00:14:09.546504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546927] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.546978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547119] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547461] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547872] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.547968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548344] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548570] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.548692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549302] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549658] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.549975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550021] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.009  [2024-12-10 00:14:09.550355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550449] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.550971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551256] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.551980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552018] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.552982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553068] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553237] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553505] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.010  [2024-12-10 00:14:09.553921] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.553969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554192] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554727] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.554954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555153] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555402] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.555995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556116] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556290] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556332] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.556992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557235] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557421] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557705] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.557982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.558026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.011  [2024-12-10 00:14:09.558087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.558132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.558180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.558228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.558923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.558972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559294] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559540] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559777] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559943] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.559989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560113] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560784] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560923] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.560966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561144] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561191] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561281] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561374] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.561990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562327] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562472] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.012  [2024-12-10 00:14:09.562604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562691] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.562977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.013  [2024-12-10 00:14:09.563230] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.014  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:54.014  [2024-12-10 00:14:09.568426] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576903] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.576976] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577151] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.577563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578578] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578693] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578815] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.578975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579139] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579469] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579555] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579847] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579936] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.579984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580030] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580073] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.016  [2024-12-10 00:14:09.580316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580587] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580634] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580726] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580814] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580859] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.580966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581382] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581565] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581759] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581857] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581947] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.581986] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582026] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582069] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582203] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582242] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582676] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.582987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583458] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583593] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.583958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584470] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584709] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.017  [2024-12-10 00:14:09.584870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.584902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.584940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.584981] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585393] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585436] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585674] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585720] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585885] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.585967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586042] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586304] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.018  [2024-12-10 00:14:09.586493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [last message repeated 260 times between 00:14:09.586547 and 00:14:09.599975]
00:32:54.021  [2024-12-10 00:14:09.600017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600238] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600721] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600821] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.600958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601005] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601670] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601862] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.601959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602003] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602568] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602775] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602822] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.602978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603013] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603394] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.603967] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.021  [2024-12-10 00:14:09.604313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604357] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604519] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604606] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604799] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604935] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.604982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605074] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605174] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605498] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605732] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605783] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605874] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605916] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.605993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606040] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606484] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606524] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.606642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607488] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607684] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607829] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.607972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608364] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608561] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608649] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608839] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.608989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609083] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609175] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609255] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609538] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609656] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.022  [2024-12-10 00:14:09.609694] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609956] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.609997] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  [2024-12-10 00:14:09.610462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.023  Message repeated 126 times (00:14:09.610501 through 00:14:09.616535): ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.024  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:54.024  [2024-12-10 00:14:09.617361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.024  Message repeated 132 times (00:14:09.617411 through 00:14:09.623744): ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623954] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.623996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624194] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624481] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624569] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624620] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.025  [2024-12-10 00:14:09.624861] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.624907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.624949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.624993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625431] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625707] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.625962] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626008] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626149] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626245] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.626529] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627273] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627401] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627478] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627571] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627651] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627733] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627855] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.627979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628057] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628138] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628265] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628447] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628592] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628734] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.628972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629221] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629311] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629786] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.629992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630177] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630258] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630434] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630553] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.026  [2024-12-10 00:14:09.630675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630798] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630912] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630957] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.630995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631123] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631318] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631360] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631697] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631843] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.631979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632208] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632564] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.632849] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633462] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633608] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633654] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.633983] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634232] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634467] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634511] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634907] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.634984] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635247] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635337] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635378] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635414] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635495] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.635979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.636019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.636061] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.636105] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.027  [2024-12-10 00:14:09.636146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.636988] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637034] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637275] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637661] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637939] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.637982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638417] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638621] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638704] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638824] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638900] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638938] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.638977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639141] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639188] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639350] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.639970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640110] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640388] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640528] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640673] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640854] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640904] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640948] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.640991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641231] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641326] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641370] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641465] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641763] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641810] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.028  [2024-12-10 00:14:09.641851] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.641891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.641932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.641971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642395] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.642722] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643640] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643740] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.643982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644471] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644514] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644613] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644757] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.644989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645219] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645377] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645422] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645464] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645715] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645795] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645917] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.645955] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.646000] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.646037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.029  [2024-12-10 00:14:09.646078] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.031  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:54.031   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:54.310  [2024-12-10 00:14:09.893518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.904834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.904871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.904913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.904952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.904992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905096] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905133] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905179] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905218] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905259] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905335] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905451] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905520] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905557] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.905990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906765] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906889] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906929] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.314  [2024-12-10 00:14:09.906968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907007] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907118] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907159] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907244] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907283] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907437] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907766] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907806] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907875] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907918] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.907978] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908171] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908362] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908736] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908880] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.908974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909067] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909216] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909351] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909445] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909725] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.315  [2024-12-10 00:14:09.909813] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.909860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.909909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.909951] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.909996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910095] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910180] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910266] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910308] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910430] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910473] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910513] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910713] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910841] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910886] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.910964] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911009] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911043] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911081] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911120] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911201] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911324] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911403] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911802] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.911934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912476] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912532] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912577] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912717] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912789] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912827] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912906] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.912985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.913025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.913064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.913098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.913137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.316  [2024-12-10 00:14:09.913185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913348] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913510] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913746] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913868] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913942] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.913982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914025] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914065] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914103] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914142] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914202] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914291] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914339] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914537] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914585] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914683] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.317  [2024-12-10 00:14:09.914731] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928195] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: last message repeated 260 more times (timestamps 00:14:09.914780 through 00:14:09.928195)
00:32:54.321  [2024-12-10 00:14:09.928241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928367] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928612] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.928980] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.929019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.929060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.929102] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:32:54.321  [2024-12-10 00:14:09.929145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321  [2024-12-10 00:14:09.929198] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.321   00:14:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
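The two trace lines above show the stress loop at work: `ns_hotplug_stress.sh` bumps `null_size` (line 49) and then calls the `bdev_null_resize` RPC (line 50) while readers keep I/O in flight against the namespace. A minimal sketch of that resize step, with the `rpc.py` invocation only echoed so the command sequence is visible (the size range here is an assumption; only `1004` appears verbatim in this log excerpt):

```shell
# Hedged sketch of the per-iteration resize step from ns_hotplug_stress.sh:
# grow the null bdev NULL1 by one unit each pass. The real script invokes
# scripts/rpc.py against a live SPDK target; here we just print each command.
last_cmd=""
for null_size in 1002 1003 1004; do
    last_cmd="rpc.py bdev_null_resize NULL1 $null_size"
    echo "$last_cmd"
done
```

Resizing the backing bdev while reads are outstanding is exactly what provokes the flood of `nvmf_bdev_ctrlr_read_cmd` errors surrounding these lines: the test is deliberately racing namespace geometry changes against I/O.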
00:32:54.321  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
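The repeated `*ERROR*` line comes from a length sanity check in SPDK's `ctrlr_bdev.c`: a read is rejected when the requested transfer size (NLB times the namespace block size) exceeds what the command's SGL describes, and the host then sees the suppressed `sct=0, sc=15` (Data Transfer Error) completions noted above. A hedged sketch of that check using the values from the log (names simplified; this is not the actual SPDK C source):

```shell
# Reconstruct the rejected-read condition reported in the log:
# nlb * block_size (1 * 512 = 512 bytes requested) > sgl_length (1 byte offered).
nlb=1
block_size=512
sgl_length=1   # values taken from the log lines above
msg=""
if [ "$((nlb * block_size))" -gt "$sgl_length" ]; then
    msg="Read NLB $nlb * block size $block_size > SGL length $sgl_length"
fi
echo "$msg"
```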
00:32:54.321  [2024-12-10 00:14:09.929646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [... previous message repeated 205 more times between 00:14:09.929697 and 00:14:09.939780 ...]
00:32:54.324  [2024-12-10 00:14:09.939819] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.939860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940046] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940145] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940186] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940224] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940261] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940296] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940346] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940779] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940828] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940879] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940940] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.940987] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941036] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941084] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941129] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941270] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941314] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941497] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941582] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941794] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941871] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941901] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.941982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942104] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942246] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942329] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942441] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942483] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942563] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942605] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942647] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942724] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942920] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942958] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.942995] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943028] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943062] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943147] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943189] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943331] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.324  [2024-12-10 00:14:09.943429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943474] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943719] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943770] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943816] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943863] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.943959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944398] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944496] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944586] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944665] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944747] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944825] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944952] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.944991] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945027] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945060] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945100] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945233] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945313] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945427] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945468] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945507] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945583] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945625] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945667] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945748] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.945989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946033] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946076] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946210] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946254] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946309] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946404] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946696] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946835] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946882] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.946977] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947358] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947455] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947502] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947554] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947652] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.325  [2024-12-10 00:14:09.947700] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947790] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947838] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947887] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947931] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.947979] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948023] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948094] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948268] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948359] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948405] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948452] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948546] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948594] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948752] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948793] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948832] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.948959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949047] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949087] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949172] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949209] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949250] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949298] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949500] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949539] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949664] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.326  [2024-12-10 00:14:09.949702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.950227 - 00:14:09.963031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: (previous *ERROR* message repeated 260 more times)
00:32:54.329  [2024-12-10 00:14:09.963080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963126] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963826] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963869] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963913] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.963961] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.964006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.329  [2024-12-10 00:14:09.964053] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964098] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964146] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964286] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964333] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964375] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964418] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964527] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964562] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964602] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964643] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964730] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964769] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964850] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964891] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964932] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.964966] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965050] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965132] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965185] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965228] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965267] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965306] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965345] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965425] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965466] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965549] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965628] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965671] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965751] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965782] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965823] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965866] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965896] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965934] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.965974] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966112] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966160] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966212] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966356] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966558] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966604] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966650] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966699] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966837] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966888] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.966933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967692] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967729] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967771] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967809] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967842] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967883] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.967965] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968004] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968044] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968088] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968131] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968181] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.330  [2024-12-10 00:14:09.968252] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968288] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968328] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968411] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968453] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968494] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968536] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968580] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968619] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968787] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968928] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.968972] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969020] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969063] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969162] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969307] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969352] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969490] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969681] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969773] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.969968] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970064] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970207] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970278] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970535] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970576] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970655] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970735] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970772] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970890] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970930] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.970970] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971006] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971045] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971086] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971124] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971163] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971253] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971295] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971334] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971372] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971412] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971487] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971556] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971595] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971626] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971663] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971710] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971754] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.331  [2024-12-10 00:14:09.971801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.971844] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.971894] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.971941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.971990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972038] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972125] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972227] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972323] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972365] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972459] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972690] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.972737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973262] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973354] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973479] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973518] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973552] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [2024-12-10 00:14:09.973591] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.332  [... previous message repeated 135 times ...]
00:32:54.334  Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:32:54.334  [2024-12-10 00:14:09.980774] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.334  [... previous message repeated 123 times ...]
00:32:54.335  [2024-12-10 00:14:09.986870] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.986911] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.986953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987280] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987320] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987371] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987410] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987521] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987560] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987597] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987635] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987675] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987714] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987753] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.335  [2024-12-10 00:14:09.987993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988071] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988111] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988150] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988222] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988342] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988384] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988433] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988482] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988573] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988616] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988662] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988706] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988755] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988803] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988848] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988949] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.988992] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989041] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989089] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989143] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989196] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989240] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989293] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989338] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989386] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989438] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989579] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989622] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989666] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989711] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989762] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989807] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989853] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.989946] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990121] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990173] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990213] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990260] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990303] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990343] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990376] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990420] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.990456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991107] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991200] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991241] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991284] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991321] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991413] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991503] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991596] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991642] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991688] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991741] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991788] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991833] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991881] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991926] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.991971] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992017] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992155] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992211] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992301] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.336  [2024-12-10 00:14:09.992399] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992446] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992491] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992534] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992618] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992712] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992817] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992892] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992933] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.992975] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993014] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993091] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993137] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993184] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993225] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993264] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993340] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993379] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993419] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993463] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993508] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993547] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993588] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993627] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993669] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993708] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993749] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993791] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.993834] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994019] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994051] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994090] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994157] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994197] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994274] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994390] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994432] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994477] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994615] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994716] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994761] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994811] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994852] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994898] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.994993] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995082] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995128] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995178] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995223] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995316] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995363] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995415] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995457] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995504] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995550] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995600] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995644] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995737] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995781] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995830] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995873] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995910] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995950] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.337  [2024-12-10 00:14:09.995998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996037] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996075] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996114] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996154] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996206] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996310] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996831] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996877] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996915] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996959] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.996996] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.997031] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.997070] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.338  [2024-12-10 00:14:09.997117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [previous ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeated 260 more times, last at 2024-12-10 00:14:10.011318]
00:32:54.341  [2024-12-10 00:14:10.011383] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011460] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011541] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011624] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011682] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011738] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011792] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011864] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011937] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.011998] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012079] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012234] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012317] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012389] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012454] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012523] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012603] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012689] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.012893] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013015] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013158] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013380] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013499] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013544] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013599] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013653] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013702] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013743] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013804] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013867] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013924] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.013969] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014011] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014056] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014106] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014152] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014204] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.341  [2024-12-10 00:14:10.014251] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014292] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014336] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014442] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014493] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014542] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014645] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014695] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014750] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014805] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014905] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.014963] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015012] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015080] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015136] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015190] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015236] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015475] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015522] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015567] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015607] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015641] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015685] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015744] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015820] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015897] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015941] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.015985] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016097] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016135] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016182] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016243] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016297] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016341] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016387] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016428] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016480] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016517] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016559] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016598] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016638] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016677] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016758] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016796] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016836] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016914] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016944] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.016982] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017016] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017099] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017140] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017187] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017229] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017271] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017312] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017355] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017400] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017443] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017489] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017574] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017629] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.017668] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018217] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018272] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018315] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018361] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018407] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018450] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018501] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018543] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.342  [2024-12-10 00:14:10.018589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018633] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018678] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018723] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018768] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018812] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018860] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018902] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018945] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.018989] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019032] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019077] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019122] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019176] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019226] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019277] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019322] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019368] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019408] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019456] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019512] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019551] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019589] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019636] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019679] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019718] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019756] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019801] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019845] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019876] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019909] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019953] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.019990] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020029] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020066] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020108] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020148] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020193] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020239] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020287] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020330] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020369] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020406] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020448] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020486] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020525] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020566] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020611] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020659] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020701] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020742] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020780] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020818] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020856] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020895] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.020925] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021117] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021205] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021248] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021285] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021319] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021353] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021391] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.021429] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022058] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022115] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022161] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022215] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022257] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022305] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022349] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022397] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022439] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022485] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022531] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022575] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022609] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022646] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [2024-12-10 00:14:10.022687] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.343  [... previous ctrlr_bdev.c:384 nvmf_bdev_ctrlr_read_cmd error line repeated 241 times, timestamps 00:14:10.022723 through 00:14:10.034629 omitted ...]
00:32:54.347  Message suppressed 999 times: [2024-12-10 00:14:10.034672] ctrlr_bdev.c: 384:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:32:54.347  Read completed with error (sct=0, sc=15)
00:32:54.349  true
00:32:54.349   00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:54.349   00:14:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:55.285   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:55.543   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:32:55.543   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:32:55.801  true
00:32:55.801   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:55.801   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:56.058   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:56.058   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:32:56.058   00:14:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:32:56.316  true
00:32:56.316   00:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:56.316   00:14:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:57.691  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:57.691   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:57.692  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:57.692   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:32:57.692   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:32:57.950  true
00:32:57.950   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:57.950   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:57.950   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:58.208   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:32:58.208   00:14:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:32:58.467  true
00:32:58.467   00:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:58.467   00:14:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:59.404  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665   00:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:32:59.665   00:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:32:59.665   00:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:32:59.923  true
00:32:59.923   00:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:32:59.923   00:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:00.860   00:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:00.860   00:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:33:00.860   00:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:33:01.119  true
00:33:01.119   00:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:01.119   00:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:01.377   00:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:01.645   00:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:33:01.645   00:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:33:01.645  true
00:33:01.645   00:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:01.645   00:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026   00:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:03.026   00:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:33:03.026   00:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:33:03.284  true
00:33:03.284   00:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:03.284   00:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:04.219   00:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:04.219   00:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:33:04.219   00:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:33:04.478  true
00:33:04.478   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:04.478   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:04.737   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:04.737   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:33:04.737   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:33:04.995  true
00:33:04.996   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:04.996   00:14:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:05.939  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:05.939   00:14:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:06.198  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:06.198  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:06.198  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:06.198  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:06.198  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:06.198   00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:33:06.198   00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:33:06.456  true
00:33:06.456   00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:06.456   00:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:07.393   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:07.393   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:33:07.393   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:33:07.651  true
00:33:07.651   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:07.651   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:07.910   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:08.169   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:33:08.169   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:33:08.169  true
00:33:08.169   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:08.169   00:14:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:09.544  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:09.544   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:09.544  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:09.544  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:09.544   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:33:09.544   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:33:09.803  true
00:33:09.803   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:09.803   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:09.803   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:10.062   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:33:10.062   00:14:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:33:10.320  true
00:33:10.320   00:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:10.320   00:14:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697   00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:11.697   00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:33:11.697   00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:33:11.956  true
00:33:11.956   00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:11.956   00:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:12.894   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:12.894   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:33:12.894   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:33:13.152  true
00:33:13.152   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:13.152   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:13.153   00:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:13.411   00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:33:13.411   00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:33:13.671  true
00:33:13.671   00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:13.671   00:14:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:14.662  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985   00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:14.985   00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:33:14.985   00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:33:15.244  true
00:33:15.244   00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:15.244   00:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:16.179   00:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:16.179   00:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:33:16.179   00:14:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:33:16.438  true
00:33:16.438   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:16.438   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:16.696   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:16.696   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:33:16.696   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:33:16.955  true
00:33:16.955   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:16.955   00:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330   00:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:18.330   00:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:33:18.330   00:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:33:18.330  true
00:33:18.589   00:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:18.589   00:14:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:19.524   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:19.524   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:33:19.524   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:33:19.782  true
00:33:19.782   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:19.782   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:19.782   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:20.041   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:33:20.041   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:33:20.299  true
00:33:20.299   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:20.299   00:14:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:21.235  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494   00:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494  Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:33:21.494   00:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:33:21.494   00:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:33:21.754  true
00:33:21.754   00:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:21.754   00:14:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:22.691   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:22.691  Initializing NVMe Controllers
00:33:22.691  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:22.691  Controller IO queue size 128, less than required.
00:33:22.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:22.691  Controller IO queue size 128, less than required.
00:33:22.691  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:22.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:33:22.691  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:33:22.691  Initialization complete. Launching workers.
00:33:22.691  ========================================================
00:33:22.691                                                                                                               Latency(us)
00:33:22.691  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:22.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:    2508.20       1.22   34552.31    1996.29 1023193.26
00:33:22.691  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core  0:   17665.70       8.63    7223.48    1534.70  368150.81
00:33:22.691  ========================================================
00:33:22.691  Total                                                                    :   20173.90       9.85   10621.25    1534.70 1023193.26
00:33:22.691  
00:33:22.691   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:33:22.691   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:33:22.950  true
00:33:22.950   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3259976
00:33:22.950  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3259976) - No such process
00:33:22.950   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3259976
00:33:22.950   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:23.208   00:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:33:23.467  null0
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:23.467   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:33:23.725  null1
00:33:23.725   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:23.725   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:23.725   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:33:23.993  null2
00:33:23.993   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:23.994   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:23.994   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:33:24.252  null3
00:33:24.252   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:24.252   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:24.252   00:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:33:24.252  null4
00:33:24.252   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:24.252   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:24.252   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:33:24.511  null5
00:33:24.511   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:24.511   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:24.511   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:33:24.770  null6
00:33:24.770   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:24.770   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:24.770   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:33:25.030  null7
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.030   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3265236 3265238 3265241 3265244 3265247 3265250 3265252 3265255
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.031   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:25.290   00:14:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:25.290   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.291   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:25.549   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:25.549   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:25.549   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:25.549   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:25.550   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:25.550   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:25.550   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:25.550   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.808   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:25.809   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:26.068   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:26.068   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:26.068   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.069   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.329   00:14:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:26.329   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:26.588   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:26.847   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:27.106   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:27.365   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:27.365   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:27.365   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:27.365   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:27.365   00:14:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:27.365   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.366   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.366   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:27.624   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:27.883   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.141   00:14:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:28.401   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.670   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:33:28.671   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.671   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.671   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:28.935   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:29.194  rmmod nvme_tcp
00:33:29.194  rmmod nvme_fabrics
00:33:29.194  rmmod nvme_keyring
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3259501 ']'
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3259501
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3259501 ']'
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3259501
00:33:29.194    00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:29.194    00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3259501
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3259501'
00:33:29.194  killing process with pid 3259501
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3259501
00:33:29.194   00:14:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3259501
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:29.453   00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:29.453    00:14:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:31.357   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:31.357  
00:33:31.357  real	0m48.366s
00:33:31.357  user	2m58.549s
00:33:31.357  sys	0m19.786s
00:33:31.357   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:31.357   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:33:31.357  ************************************
00:33:31.357  END TEST nvmf_ns_hotplug_stress
00:33:31.357  ************************************
00:33:31.617   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:31.617   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:31.617   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:31.617   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:31.617  ************************************
00:33:31.617  START TEST nvmf_delete_subsystem
00:33:31.617  ************************************
00:33:31.617   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:33:31.617  * Looking for test storage...
00:33:31.617  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:31.617    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:31.617     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version
00:33:31.617     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:31.617    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:31.618  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:31.618  		--rc genhtml_branch_coverage=1
00:33:31.618  		--rc genhtml_function_coverage=1
00:33:31.618  		--rc genhtml_legend=1
00:33:31.618  		--rc geninfo_all_blocks=1
00:33:31.618  		--rc geninfo_unexecuted_blocks=1
00:33:31.618  		
00:33:31.618  		'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:31.618  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:31.618  		--rc genhtml_branch_coverage=1
00:33:31.618  		--rc genhtml_function_coverage=1
00:33:31.618  		--rc genhtml_legend=1
00:33:31.618  		--rc geninfo_all_blocks=1
00:33:31.618  		--rc geninfo_unexecuted_blocks=1
00:33:31.618  		
00:33:31.618  		'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:31.618  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:31.618  		--rc genhtml_branch_coverage=1
00:33:31.618  		--rc genhtml_function_coverage=1
00:33:31.618  		--rc genhtml_legend=1
00:33:31.618  		--rc geninfo_all_blocks=1
00:33:31.618  		--rc geninfo_unexecuted_blocks=1
00:33:31.618  		
00:33:31.618  		'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:31.618  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:31.618  		--rc genhtml_branch_coverage=1
00:33:31.618  		--rc genhtml_function_coverage=1
00:33:31.618  		--rc genhtml_legend=1
00:33:31.618  		--rc geninfo_all_blocks=1
00:33:31.618  		--rc geninfo_unexecuted_blocks=1
00:33:31.618  		
00:33:31.618  		'
00:33:31.618   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:31.618    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:31.618     00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:31.618      00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:31.618      00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:31.619      00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:31.619      00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:33:31.619      00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:31.619   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:31.619    00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:31.879   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:33:31.879   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:33:31.879   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:33:31.879   00:14:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:33:38.452  Found 0000:af:00.0 (0x8086 - 0x159b)
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:33:38.452  Found 0000:af:00.1 (0x8086 - 0x159b)
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:33:38.452  Found net devices under 0000:af:00.0: cvl_0_0
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:33:38.452  Found net devices under 0000:af:00.1: cvl_0_1
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:33:38.452   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:38.453  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:38.453  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms
00:33:38.453  
00:33:38.453  --- 10.0.0.2 ping statistics ---
00:33:38.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:38.453  rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:38.453  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:38.453  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms
00:33:38.453  
00:33:38.453  --- 10.0.0.1 ping statistics ---
00:33:38.453  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:38.453  rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3269481
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3269481
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3269481 ']'
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:38.453  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.453  [2024-12-10 00:14:53.400730] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:38.453  [2024-12-10 00:14:53.401586] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:33:38.453  [2024-12-10 00:14:53.401616] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:38.453  [2024-12-10 00:14:53.474354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:33:38.453  [2024-12-10 00:14:53.522382] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:38.453  [2024-12-10 00:14:53.522419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:38.453  [2024-12-10 00:14:53.522426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:38.453  [2024-12-10 00:14:53.522432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:38.453  [2024-12-10 00:14:53.522437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:38.453  [2024-12-10 00:14:53.523532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:38.453  [2024-12-10 00:14:53.523535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:38.453  [2024-12-10 00:14:53.591411] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:38.453  [2024-12-10 00:14:53.591991] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:38.453  [2024-12-10 00:14:53.592187] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.453   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.453  [2024-12-10 00:14:53.672337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.454  [2024-12-10 00:14:53.700656] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.454  NULL1
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.454  Delay0
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3269704
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:33:38.454   00:14:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:33:38.454  [2024-12-10 00:14:53.811061] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:33:40.361   00:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:40.361   00:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:40.361   00:14:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  [2024-12-10 00:14:55.982891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc9780 is same with the state(6) to be set
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Write completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  Read completed with error (sct=0, sc=8)
00:33:40.361  starting I/O failed: -6
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  starting I/O failed: -6
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  starting I/O failed: -6
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  starting I/O failed: -6
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  starting I/O failed: -6
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  [2024-12-10 00:14:55.984366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4224000c80 is same with the state(6) to be set
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Write completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:40.362  Read completed with error (sct=0, sc=8)
00:33:41.298  [2024-12-10 00:14:56.948062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cca9b0 is same with the state(6) to be set
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  [2024-12-10 00:14:56.987053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc92c0 is same with the state(6) to be set
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.298  Write completed with error (sct=0, sc=8)
00:33:41.298  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  [2024-12-10 00:14:56.987193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc9960 is same with the state(6) to be set
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  [2024-12-10 00:14:56.987422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f422400d800 is same with the state(6) to be set
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  Write completed with error (sct=0, sc=8)
00:33:41.299  Read completed with error (sct=0, sc=8)
00:33:41.299  [2024-12-10 00:14:56.988009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f422400d060 is same with the state(6) to be set
00:33:41.299  Initializing NVMe Controllers
00:33:41.299  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:41.299  Controller IO queue size 128, less than required.
00:33:41.299  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:41.299  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:41.299  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:41.299  Initialization complete. Launching workers.
00:33:41.299  ========================================================
00:33:41.299                                                                                                               Latency(us)
00:33:41.299  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:41.299  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     171.83       0.08  892123.06     329.62 1011243.86
00:33:41.299  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     161.90       0.08  913344.09     255.66 1012602.20
00:33:41.299  ========================================================
00:33:41.299  Total                                                                    :     333.73       0.16  902417.79     255.66 1012602.20
00:33:41.299  
00:33:41.299  [2024-12-10 00:14:56.988555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cca9b0 (9): Bad file descriptor
00:33:41.299  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:33:41.299   00:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:41.299   00:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:33:41.299   00:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3269704
00:33:41.299   00:14:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3269704
00:33:41.867  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3269704) - No such process
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3269704
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3269704
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:41.867    00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3269704
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:41.867  [2024-12-10 00:14:57.516519] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3270162
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:41.867   00:14:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:41.867  [2024-12-10 00:14:57.597999] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem.  This behavior is deprecated and will be removed in a future release.
00:33:42.435   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:42.435   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:42.435   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:42.693   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:42.693   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:42.693   00:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:43.261   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:43.261   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:43.261   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:43.834   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:43.834   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:43.834   00:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:44.401   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:44.402   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:44.402   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:44.969   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:44.969   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:44.969   00:15:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:33:44.969  Initializing NVMe Controllers
00:33:44.969  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:33:44.969  Controller IO queue size 128, less than required.
00:33:44.969  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:33:44.969  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:33:44.969  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:33:44.969  Initialization complete. Launching workers.
00:33:44.969  ========================================================
00:33:44.969                                                                                                               Latency(us)
00:33:44.969  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:33:44.969  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002809.19 1000166.19 1008034.01
00:33:44.969  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1004649.09 1000222.01 1041163.85
00:33:44.969  ========================================================
00:33:44.969  Total                                                                    :     256.00       0.12 1003729.14 1000166.19 1041163.85
00:33:44.969  
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3270162
00:33:45.228  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3270162) - No such process
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3270162
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:45.228   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:45.228  rmmod nvme_tcp
00:33:45.487  rmmod nvme_fabrics
00:33:45.487  rmmod nvme_keyring
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3269481 ']'
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3269481
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3269481 ']'
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3269481
00:33:45.487    00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:45.487    00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3269481
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3269481'
00:33:45.487  killing process with pid 3269481
00:33:45.487   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3269481
00:33:45.488   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3269481
00:33:45.746   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:45.747   00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:45.747    00:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:47.650  
00:33:47.650  real	0m16.181s
00:33:47.650  user	0m26.355s
00:33:47.650  sys	0m6.018s
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:33:47.650  ************************************
00:33:47.650  END TEST nvmf_delete_subsystem
00:33:47.650  ************************************
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:47.650   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:33:47.910  ************************************
00:33:47.910  START TEST nvmf_host_management
00:33:47.910  ************************************
00:33:47.910   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode
00:33:47.910  * Looking for test storage...
00:33:47.910  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-:
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-:
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<'
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 ))
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:33:47.910     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:33:47.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:47.910  		--rc genhtml_branch_coverage=1
00:33:47.910  		--rc genhtml_function_coverage=1
00:33:47.910  		--rc genhtml_legend=1
00:33:47.910  		--rc geninfo_all_blocks=1
00:33:47.910  		--rc geninfo_unexecuted_blocks=1
00:33:47.910  		
00:33:47.910  		'
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:33:47.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:47.910  		--rc genhtml_branch_coverage=1
00:33:47.910  		--rc genhtml_function_coverage=1
00:33:47.910  		--rc genhtml_legend=1
00:33:47.910  		--rc geninfo_all_blocks=1
00:33:47.910  		--rc geninfo_unexecuted_blocks=1
00:33:47.910  		
00:33:47.910  		'
00:33:47.910    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:33:47.910  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:47.911  		--rc genhtml_branch_coverage=1
00:33:47.911  		--rc genhtml_function_coverage=1
00:33:47.911  		--rc genhtml_legend=1
00:33:47.911  		--rc geninfo_all_blocks=1
00:33:47.911  		--rc geninfo_unexecuted_blocks=1
00:33:47.911  		
00:33:47.911  		'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:33:47.911  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:33:47.911  		--rc genhtml_branch_coverage=1
00:33:47.911  		--rc genhtml_function_coverage=1
00:33:47.911  		--rc genhtml_legend=1
00:33:47.911  		--rc geninfo_all_blocks=1
00:33:47.911  		--rc geninfo_unexecuted_blocks=1
00:33:47.911  		
00:33:47.911  		'
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:47.911     00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:47.911      00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:47.911      00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:47.911      00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:47.911      00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:33:47.911      00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:47.911    00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable
00:33:47.911   00:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=()
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:33:54.478  Found 0000:af:00.0 (0x8086 - 0x159b)
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:33:54.478  Found 0000:af:00.1 (0x8086 - 0x159b)
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:54.478   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:33:54.479  Found net devices under 0000:af:00.0: cvl_0_0
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:33:54.479  Found net devices under 0000:af:00.1: cvl_0_1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:33:54.479  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:54.479  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms
00:33:54.479  
00:33:54.479  --- 10.0.0.2 ping statistics ---
00:33:54.479  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:54.479  rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:54.479  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:54.479  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms
00:33:54.479  
00:33:54.479  --- 10.0.0.1 ping statistics ---
00:33:54.479  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:54.479  rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3274808
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3274808
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3274808 ']'
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:54.479  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:54.479   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.479  [2024-12-10 00:15:09.717265] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:33:54.479  [2024-12-10 00:15:09.718217] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:33:54.479  [2024-12-10 00:15:09.718253] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:54.480  [2024-12-10 00:15:09.796112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:33:54.480  [2024-12-10 00:15:09.837478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:54.480  [2024-12-10 00:15:09.837514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:54.480  [2024-12-10 00:15:09.837521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:54.480  [2024-12-10 00:15:09.837527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:54.480  [2024-12-10 00:15:09.837532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:54.480  [2024-12-10 00:15:09.838940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:33:54.480  [2024-12-10 00:15:09.839045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:33:54.480  [2024-12-10 00:15:09.839150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:33:54.480  [2024-12-10 00:15:09.839151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:33:54.480  [2024-12-10 00:15:09.906962] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:33:54.480  [2024-12-10 00:15:09.908189] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:33:54.480  [2024-12-10 00:15:09.908357] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:33:54.480  [2024-12-10 00:15:09.908669] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:33:54.480  [2024-12-10 00:15:09.908712] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480  [2024-12-10 00:15:09.971828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:54.480   00:15:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480  Malloc0
00:33:54.480  [2024-12-10 00:15:10.067972] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3274861
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3274861 /var/tmp/bdevperf.sock
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3274861 ']'
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:33:54.480  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:33:54.480   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:54.480  {
00:33:54.480    "params": {
00:33:54.480      "name": "Nvme$subsystem",
00:33:54.480      "trtype": "$TEST_TRANSPORT",
00:33:54.480      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:54.480      "adrfam": "ipv4",
00:33:54.480      "trsvcid": "$NVMF_PORT",
00:33:54.480      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:54.480      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:54.480      "hdgst": ${hdgst:-false},
00:33:54.480      "ddgst": ${ddgst:-false}
00:33:54.480    },
00:33:54.480    "method": "bdev_nvme_attach_controller"
00:33:54.480  }
00:33:54.480  EOF
00:33:54.480  )")
00:33:54.480     00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:54.480    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:54.480     00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:54.480     00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:54.480    "params": {
00:33:54.480      "name": "Nvme0",
00:33:54.480      "trtype": "tcp",
00:33:54.480      "traddr": "10.0.0.2",
00:33:54.480      "adrfam": "ipv4",
00:33:54.480      "trsvcid": "4420",
00:33:54.480      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:54.480      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:54.480      "hdgst": false,
00:33:54.480      "ddgst": false
00:33:54.480    },
00:33:54.480    "method": "bdev_nvme_attach_controller"
00:33:54.480  }'
00:33:54.480  [2024-12-10 00:15:10.163979] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:33:54.480  [2024-12-10 00:15:10.164035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3274861 ]
00:33:54.480  [2024-12-10 00:15:10.240233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:54.480  [2024-12-10 00:15:10.279871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:54.739  Running I/O for 10 seconds...
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:54.739    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:54.739    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:54.739    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.739    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.739    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=92
00:33:54.739   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 92 -ge 100 ']'
00:33:54.740   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:33:54.998   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:33:54.998   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:33:54.998    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:33:54.998    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:33:54.998    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:54.998    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:54.998    00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']'
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.259   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:55.259  [2024-12-10 00:15:10.874160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:55.259  [2024-12-10 00:15:10.874207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.260  [2024-12-10 00:15:10.874218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:55.260  [2024-12-10 00:15:10.874225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.260  [2024-12-10 00:15:10.874233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:55.260  [2024-12-10 00:15:10.874240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.260  [2024-12-10 00:15:10.874247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:55.260  [2024-12-10 00:15:10.874253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.260  [2024-12-10 00:15:10.874260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c7e0 is same with the state(6) to be set
00:33:55.260  [2024-12-10 00:15:10.875355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d8730 is same with the state(6) to be set
00:33:55.261  [2024-12-10 00:15:10.875808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.875986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.875992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.261  [2024-12-10 00:15:10.876199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.261  [2024-12-10 00:15:10.876205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.262  [2024-12-10 00:15:10.876570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.262  [2024-12-10 00:15:10.876656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.262  [2024-12-10 00:15:10.876663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.263  [2024-12-10 00:15:10.876793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.876800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db5770 is same with the state(6) to be set
00:33:55.263   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:33:55.263   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:55.263   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:33:55.263  [2024-12-10 00:15:10.877755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:33:55.263  task offset: 98304 on job bdev=Nvme0n1 fails
00:33:55.263  
00:33:55.263                                                                                                  Latency(us)
00:33:55.263  
[2024-12-09T23:15:11.120Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:55.263  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:55.263  Job: Nvme0n1 ended in about 0.40 seconds with error
00:33:55.263  	 Verification LBA range: start 0x0 length 0x400
00:33:55.263  	 Nvme0n1             :       0.40    1913.53     119.60     159.46     0.00   30050.60    3432.84   26713.72
00:33:55.263  
[2024-12-09T23:15:11.120Z]  ===================================================================================================================
00:33:55.263  
[2024-12-09T23:15:11.120Z]  Total                       :               1913.53     119.60     159.46     0.00   30050.60    3432.84   26713.72
00:33:55.263  [2024-12-10 00:15:10.880110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:33:55.263  [2024-12-10 00:15:10.880129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9c7e0 (9): Bad file descriptor
00:33:55.263  [2024-12-10 00:15:10.881129] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0'
00:33:55.263  [2024-12-10 00:15:10.881211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:33:55.263  [2024-12-10 00:15:10.881233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.263  [2024-12-10 00:15:10.881249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0
00:33:55.263  [2024-12-10 00:15:10.881256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132
00:33:55.263  [2024-12-10 00:15:10.881263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:55.263  [2024-12-10 00:15:10.881269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b9c7e0
00:33:55.263  [2024-12-10 00:15:10.881288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b9c7e0 (9): Bad file descriptor
00:33:55.263  [2024-12-10 00:15:10.881300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:33:55.263  [2024-12-10 00:15:10.881306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:33:55.263  [2024-12-10 00:15:10.881314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:33:55.263  [2024-12-10 00:15:10.881322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:33:55.263   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:55.263   00:15:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:33:56.200   00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3274861
00:33:56.200  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3274861) - No such process
00:33:56.200   00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:33:56.200   00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:33:56.200   00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:33:56.200    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:33:56.200    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:33:56.200    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:33:56.200    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:56.200    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:56.200  {
00:33:56.200    "params": {
00:33:56.200      "name": "Nvme$subsystem",
00:33:56.200      "trtype": "$TEST_TRANSPORT",
00:33:56.200      "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:56.200      "adrfam": "ipv4",
00:33:56.200      "trsvcid": "$NVMF_PORT",
00:33:56.201      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:56.201      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:56.201      "hdgst": ${hdgst:-false},
00:33:56.201      "ddgst": ${ddgst:-false}
00:33:56.201    },
00:33:56.201    "method": "bdev_nvme_attach_controller"
00:33:56.201  }
00:33:56.201  EOF
00:33:56.201  )")
00:33:56.201     00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:33:56.201    00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:33:56.201     00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:33:56.201     00:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:56.201    "params": {
00:33:56.201      "name": "Nvme0",
00:33:56.201      "trtype": "tcp",
00:33:56.201      "traddr": "10.0.0.2",
00:33:56.201      "adrfam": "ipv4",
00:33:56.201      "trsvcid": "4420",
00:33:56.201      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:56.201      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:56.201      "hdgst": false,
00:33:56.201      "ddgst": false
00:33:56.201    },
00:33:56.201    "method": "bdev_nvme_attach_controller"
00:33:56.201  }'
00:33:56.201  [2024-12-10 00:15:11.943464] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:33:56.201  [2024-12-10 00:15:11.943515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3275130 ]
00:33:56.201  [2024-12-10 00:15:12.022293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:56.459  [2024-12-10 00:15:12.060658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:56.718  Running I/O for 1 seconds...
00:33:57.655       1984.00 IOPS,   124.00 MiB/s
00:33:57.655                                                                                                  Latency(us)
00:33:57.655  
[2024-12-09T23:15:13.512Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:57.655  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:33:57.655  	 Verification LBA range: start 0x0 length 0x400
00:33:57.655  	 Nvme0n1             :       1.01    2021.98     126.37       0.00     0.00   31149.09    6772.05   26838.55
00:33:57.655  
[2024-12-09T23:15:13.512Z]  ===================================================================================================================
00:33:57.655  
[2024-12-09T23:15:13.512Z]  Total                       :               2021.98     126.37       0.00     0.00   31149.09    6772.05   26838.55
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:57.915  rmmod nvme_tcp
00:33:57.915  rmmod nvme_fabrics
00:33:57.915  rmmod nvme_keyring
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3274808 ']'
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3274808
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3274808 ']'
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3274808
00:33:57.915    00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:57.915    00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3274808
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3274808'
00:33:57.915  killing process with pid 3274808
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3274808
00:33:57.915   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3274808
00:33:58.175  [2024-12-10 00:15:13.882986] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:58.175   00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:58.175    00:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:00.713   00:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:00.713   00:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:34:00.713  
00:34:00.713  real	0m12.470s
00:34:00.713  user	0m18.417s
00:34:00.713  sys	0m6.236s
00:34:00.713   00:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:00.713   00:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:34:00.713  ************************************
00:34:00.713  END TEST nvmf_host_management
00:34:00.713  ************************************
00:34:00.713   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:34:00.713   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:00.713   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:00.713   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:00.713  ************************************
00:34:00.713  START TEST nvmf_lvol
00:34:00.713  ************************************
00:34:00.713   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode
00:34:00.713  * Looking for test storage...
00:34:00.713  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-:
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-:
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<'
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:00.713     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0
00:34:00.713    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:00.714  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:00.714  		--rc genhtml_branch_coverage=1
00:34:00.714  		--rc genhtml_function_coverage=1
00:34:00.714  		--rc genhtml_legend=1
00:34:00.714  		--rc geninfo_all_blocks=1
00:34:00.714  		--rc geninfo_unexecuted_blocks=1
00:34:00.714  		
00:34:00.714  		'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:00.714  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:00.714  		--rc genhtml_branch_coverage=1
00:34:00.714  		--rc genhtml_function_coverage=1
00:34:00.714  		--rc genhtml_legend=1
00:34:00.714  		--rc geninfo_all_blocks=1
00:34:00.714  		--rc geninfo_unexecuted_blocks=1
00:34:00.714  		
00:34:00.714  		'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:00.714  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:00.714  		--rc genhtml_branch_coverage=1
00:34:00.714  		--rc genhtml_function_coverage=1
00:34:00.714  		--rc genhtml_legend=1
00:34:00.714  		--rc geninfo_all_blocks=1
00:34:00.714  		--rc geninfo_unexecuted_blocks=1
00:34:00.714  		
00:34:00.714  		'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:00.714  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:00.714  		--rc genhtml_branch_coverage=1
00:34:00.714  		--rc genhtml_function_coverage=1
00:34:00.714  		--rc genhtml_legend=1
00:34:00.714  		--rc geninfo_all_blocks=1
00:34:00.714  		--rc geninfo_unexecuted_blocks=1
00:34:00.714  		
00:34:00.714  		'
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:00.714     00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:00.714      00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:00.714      00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:00.714      00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:00.714      00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:34:00.714      00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:00.714    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:00.714   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:00.715    00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable
00:34:00.715   00:15:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=()
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx
00:34:06.137   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:34:06.138  Found 0000:af:00.0 (0x8086 - 0x159b)
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:34:06.138  Found 0000:af:00.1 (0x8086 - 0x159b)
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:34:06.138  Found net devices under 0000:af:00.0: cvl_0_0
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:34:06.138  Found net devices under 0000:af:00.1: cvl_0_1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:06.138   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:06.398   00:15:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:06.398  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:06.398  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms
00:34:06.398  
00:34:06.398  --- 10.0.0.2 ping statistics ---
00:34:06.398  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:06.398  rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:06.398  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:06.398  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms
00:34:06.398  
00:34:06.398  --- 10.0.0.1 ping statistics ---
00:34:06.398  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:06.398  rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3278941
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3278941
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3278941 ']'
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:06.398  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:06.398   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:06.398  [2024-12-10 00:15:22.152475] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:06.399  [2024-12-10 00:15:22.153380] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:34:06.399  [2024-12-10 00:15:22.153413] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:06.399  [2024-12-10 00:15:22.231463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:06.658  [2024-12-10 00:15:22.272178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:06.658  [2024-12-10 00:15:22.272209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:06.658  [2024-12-10 00:15:22.272217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:06.658  [2024-12-10 00:15:22.272222] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:06.658  [2024-12-10 00:15:22.272227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:06.658  [2024-12-10 00:15:22.273385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:06.658  [2024-12-10 00:15:22.273494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:06.658  [2024-12-10 00:15:22.273496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
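The three "Reactor started on core N" lines above follow directly from the `-m 0x7` core mask passed to `nvmf_tgt`: each set bit in the mask selects one lcore. As a minimal sketch (an illustrative helper, not part of SPDK — SPDK/DPDK do this parsing internally in EAL), the mapping from mask to lcore list is:

```python
# Decode an SPDK/DPDK-style hex core mask into the list of lcores it enables.
# 0x7 (the nvmf_tgt -m argument above) sets bits 0-2, matching the three
# "Reactor started on core N" notices; 0x18 (the spdk_nvme_perf -c argument
# later in this run) sets bits 3 and 4, matching "lcore 3" / "lcore 4".
def cores_from_mask(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

print(cores_from_mask(0x7))   # [0, 1, 2]
print(cores_from_mask(0x18))  # [3, 4]
```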
00:34:06.658  [2024-12-10 00:15:22.340535] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:06.658  [2024-12-10 00:15:22.341360] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:34:06.658  [2024-12-10 00:15:22.341516] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:06.658  [2024-12-10 00:15:22.341691] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:06.658   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:06.917  [2024-12-10 00:15:22.570271] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:06.917    00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:07.176   00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:34:07.176    00:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:34:07.435   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:34:07.435   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:34:07.435    00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:34:07.694   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a71dccb3-4614-43af-b6f8-ca118a873ceb
00:34:07.694    00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a71dccb3-4614-43af-b6f8-ca118a873ceb lvol 20
00:34:07.951   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d6efdf8a-3c27-49c0-af87-bae086bdc475
00:34:07.951   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:34:07.951   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d6efdf8a-3c27-49c0-af87-bae086bdc475
00:34:08.209   00:15:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:08.468  [2024-12-10 00:15:24.174132] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:08.468   00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:08.728   00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3279271
00:34:08.728   00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:34:08.728   00:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:34:09.663    00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d6efdf8a-3c27-49c0-af87-bae086bdc475 MY_SNAPSHOT
00:34:09.922   00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=59296b4b-ffe7-42f7-819c-e73886e41a3e
00:34:09.922   00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d6efdf8a-3c27-49c0-af87-bae086bdc475 30
00:34:10.180    00:15:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 59296b4b-ffe7-42f7-819c-e73886e41a3e MY_CLONE
00:34:10.439   00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c4a4885c-fb26-4c14-8b83-2a9f97024241
00:34:10.439   00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c4a4885c-fb26-4c14-8b83-2a9f97024241
00:34:11.006   00:15:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3279271
00:34:19.121  Initializing NVMe Controllers
00:34:19.121  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:34:19.121  Controller IO queue size 128, less than required.
00:34:19.121  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:34:19.121  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:34:19.121  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:34:19.121  Initialization complete. Launching workers.
00:34:19.121  ========================================================
00:34:19.121                                                                                                               Latency(us)
00:34:19.121  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:34:19.121  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   12387.31      48.39   10334.13    2108.24   74462.15
00:34:19.121  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   12230.52      47.78   10467.96    3093.14   58783.19
00:34:19.121  ========================================================
00:34:19.121  Total                                                                    :   24617.83      96.16   10400.62    2108.24   74462.15
00:34:19.121  
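The Total row in the spdk_nvme_perf report above aggregates the two per-lcore rows: IOPS and MiB/s sum, min/max latencies take the extremes, and the average latency is consistent with an IOPS-weighted mean of the per-core averages (an assumption about how the tool derives it, but one that reproduces the printed 10400.62 us from the values shown):

```python
# Reproduce the Total line of the spdk_nvme_perf report from its per-core
# rows. Values are copied from the run above; the weighted-average latency
# is an assumption that happens to match the tool's printed Total.
rows = [
    # (iops, mib_s, avg_us, min_us, max_us)
    (12387.31, 48.39, 10334.13, 2108.24, 74462.15),  # NSID 1 from core 3
    (12230.52, 47.78, 10467.96, 3093.14, 58783.19),  # NSID 1 from core 4
]

total_iops = sum(r[0] for r in rows)                          # 24617.83
avg_us = sum(r[0] * r[2] for r in rows) / total_iops          # ~10400.62
overall_min = min(r[3] for r in rows)                         # 2108.24
overall_max = max(r[4] for r in rows)                         # 74462.15

print(f"Total: {total_iops:.2f} IOPS, avg {avg_us:.2f} us, "
      f"min {overall_min:.2f}, max {overall_max:.2f}")
```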
00:34:19.121   00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:19.121   00:15:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d6efdf8a-3c27-49c0-af87-bae086bdc475
00:34:19.383   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a71dccb3-4614-43af-b6f8-ca118a873ceb
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:19.642  rmmod nvme_tcp
00:34:19.642  rmmod nvme_fabrics
00:34:19.642  rmmod nvme_keyring
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:34:19.642   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3278941 ']'
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3278941
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3278941 ']'
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3278941
00:34:19.643    00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:19.643    00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3278941
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3278941'
00:34:19.643  killing process with pid 3278941
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3278941
00:34:19.643   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3278941
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:19.902   00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:19.902    00:15:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:34:22.437  
00:34:22.437  real	0m21.614s
00:34:22.437  user	0m55.274s
00:34:22.437  sys	0m9.714s
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:34:22.437  ************************************
00:34:22.437  END TEST nvmf_lvol
00:34:22.437  ************************************
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:34:22.437  ************************************
00:34:22.437  START TEST nvmf_lvs_grow
00:34:22.437  ************************************
00:34:22.437   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode
00:34:22.437  * Looking for test storage...
00:34:22.437  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:34:22.437    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:34:22.437     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:34:22.437     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:34:22.437    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:34:22.437    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:34:22.437    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:34:22.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:22.438  		--rc genhtml_branch_coverage=1
00:34:22.438  		--rc genhtml_function_coverage=1
00:34:22.438  		--rc genhtml_legend=1
00:34:22.438  		--rc geninfo_all_blocks=1
00:34:22.438  		--rc geninfo_unexecuted_blocks=1
00:34:22.438  		
00:34:22.438  		'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:34:22.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:22.438  		--rc genhtml_branch_coverage=1
00:34:22.438  		--rc genhtml_function_coverage=1
00:34:22.438  		--rc genhtml_legend=1
00:34:22.438  		--rc geninfo_all_blocks=1
00:34:22.438  		--rc geninfo_unexecuted_blocks=1
00:34:22.438  		
00:34:22.438  		'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:34:22.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:22.438  		--rc genhtml_branch_coverage=1
00:34:22.438  		--rc genhtml_function_coverage=1
00:34:22.438  		--rc genhtml_legend=1
00:34:22.438  		--rc geninfo_all_blocks=1
00:34:22.438  		--rc geninfo_unexecuted_blocks=1
00:34:22.438  		
00:34:22.438  		'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:34:22.438  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:34:22.438  		--rc genhtml_branch_coverage=1
00:34:22.438  		--rc genhtml_function_coverage=1
00:34:22.438  		--rc genhtml_legend=1
00:34:22.438  		--rc geninfo_all_blocks=1
00:34:22.438  		--rc geninfo_unexecuted_blocks=1
00:34:22.438  		
00:34:22.438  		'
00:34:22.438   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:34:22.438    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:22.438     00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:22.438      00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:22.438      00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:22.438      00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:22.438      00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:34:22.439      00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:34:22.439    00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:34:22.439   00:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:34:27.713  Found 0000:af:00.0 (0x8086 - 0x159b)
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:34:27.713  Found 0000:af:00.1 (0x8086 - 0x159b)
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:27.713   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:34:27.714  Found net devices under 0000:af:00.0: cvl_0_0
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:34:27.714  Found net devices under 0000:af:00.1: cvl_0_1
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:27.714   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:27.973   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:27.973   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:27.973   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:27.974  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:27.974  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms
00:34:27.974  
00:34:27.974  --- 10.0.0.2 ping statistics ---
00:34:27.974  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:27.974  rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:27.974  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:27.974  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms
00:34:27.974  
00:34:27.974  --- 10.0.0.1 ping statistics ---
00:34:27.974  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:27.974  rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:27.974   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3284516
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3284516
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3284516 ']'
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:28.233   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:28.234  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:28.234   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:28.234   00:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:28.234  [2024-12-10 00:15:43.881354] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:28.234  [2024-12-10 00:15:43.882197] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:34:28.234  [2024-12-10 00:15:43.882226] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:28.234  [2024-12-10 00:15:43.960373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:28.234  [2024-12-10 00:15:44.000540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:28.234  [2024-12-10 00:15:44.000566] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:28.234  [2024-12-10 00:15:44.000574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:28.234  [2024-12-10 00:15:44.000580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:28.234  [2024-12-10 00:15:44.000585] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:28.234  [2024-12-10 00:15:44.000950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:28.234  [2024-12-10 00:15:44.067127] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:28.234  [2024-12-10 00:15:44.067327] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:34:29.169  [2024-12-10 00:15:44.925674] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:29.169  ************************************
00:34:29.169  START TEST lvs_grow_clean
00:34:29.169  ************************************
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:34:29.169   00:15:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:29.169   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:29.169    00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:29.426   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:34:29.426    00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:34:29.683   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7eda8c26-9e8c-4543-8105-842d534733ab
00:34:29.683    00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:29.683    00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:34:29.942   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:34:29.942   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:34:29.942    00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7eda8c26-9e8c-4543-8105-842d534733ab lvol 150
00:34:30.201   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=595e3e0a-e078-405d-bde5-3ee7a4c4a57c
00:34:30.201   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:30.201   00:15:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:34:30.201  [2024-12-10 00:15:45.985338] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:34:30.201  [2024-12-10 00:15:45.985466] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:34:30.201  true
00:34:30.201    00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:30.201    00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:34:30.459   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:34:30.459   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:34:30.718   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 595e3e0a-e078-405d-bde5-3ee7a4c4a57c
00:34:30.718   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:30.977  [2024-12-10 00:15:46.721787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:30.977   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3285007
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3285007 /var/tmp/bdevperf.sock
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3285007 ']'
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:31.235  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:31.235   00:15:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:34:31.235  [2024-12-10 00:15:46.955207] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:34:31.235  [2024-12-10 00:15:46.955253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3285007 ]
00:34:31.235  [2024-12-10 00:15:47.027454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:31.235  [2024-12-10 00:15:47.068649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:31.493   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:31.493   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:34:31.493   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:34:31.752  Nvme0n1
00:34:31.752   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:34:32.010  [
00:34:32.010    {
00:34:32.010      "name": "Nvme0n1",
00:34:32.010      "aliases": [
00:34:32.010        "595e3e0a-e078-405d-bde5-3ee7a4c4a57c"
00:34:32.010      ],
00:34:32.010      "product_name": "NVMe disk",
00:34:32.010      "block_size": 4096,
00:34:32.010      "num_blocks": 38912,
00:34:32.010      "uuid": "595e3e0a-e078-405d-bde5-3ee7a4c4a57c",
00:34:32.010      "numa_id": 1,
00:34:32.010      "assigned_rate_limits": {
00:34:32.010        "rw_ios_per_sec": 0,
00:34:32.010        "rw_mbytes_per_sec": 0,
00:34:32.010        "r_mbytes_per_sec": 0,
00:34:32.010        "w_mbytes_per_sec": 0
00:34:32.010      },
00:34:32.010      "claimed": false,
00:34:32.010      "zoned": false,
00:34:32.010      "supported_io_types": {
00:34:32.010        "read": true,
00:34:32.010        "write": true,
00:34:32.010        "unmap": true,
00:34:32.010        "flush": true,
00:34:32.010        "reset": true,
00:34:32.010        "nvme_admin": true,
00:34:32.010        "nvme_io": true,
00:34:32.010        "nvme_io_md": false,
00:34:32.010        "write_zeroes": true,
00:34:32.010        "zcopy": false,
00:34:32.010        "get_zone_info": false,
00:34:32.010        "zone_management": false,
00:34:32.010        "zone_append": false,
00:34:32.010        "compare": true,
00:34:32.010        "compare_and_write": true,
00:34:32.010        "abort": true,
00:34:32.010        "seek_hole": false,
00:34:32.010        "seek_data": false,
00:34:32.010        "copy": true,
00:34:32.010        "nvme_iov_md": false
00:34:32.010      },
00:34:32.010      "memory_domains": [
00:34:32.010        {
00:34:32.010          "dma_device_id": "system",
00:34:32.010          "dma_device_type": 1
00:34:32.010        }
00:34:32.010      ],
00:34:32.010      "driver_specific": {
00:34:32.010        "nvme": [
00:34:32.010          {
00:34:32.010            "trid": {
00:34:32.010              "trtype": "TCP",
00:34:32.010              "adrfam": "IPv4",
00:34:32.010              "traddr": "10.0.0.2",
00:34:32.010              "trsvcid": "4420",
00:34:32.010              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:34:32.010            },
00:34:32.010            "ctrlr_data": {
00:34:32.010              "cntlid": 1,
00:34:32.010              "vendor_id": "0x8086",
00:34:32.010              "model_number": "SPDK bdev Controller",
00:34:32.010              "serial_number": "SPDK0",
00:34:32.010              "firmware_revision": "25.01",
00:34:32.011              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:32.011              "oacs": {
00:34:32.011                "security": 0,
00:34:32.011                "format": 0,
00:34:32.011                "firmware": 0,
00:34:32.011                "ns_manage": 0
00:34:32.011              },
00:34:32.011              "multi_ctrlr": true,
00:34:32.011              "ana_reporting": false
00:34:32.011            },
00:34:32.011            "vs": {
00:34:32.011              "nvme_version": "1.3"
00:34:32.011            },
00:34:32.011            "ns_data": {
00:34:32.011              "id": 1,
00:34:32.011              "can_share": true
00:34:32.011            }
00:34:32.011          }
00:34:32.011        ],
00:34:32.011        "mp_policy": "active_passive"
00:34:32.011      }
00:34:32.011    }
00:34:32.011  ]
00:34:32.011   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3285229
00:34:32.011   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:34:32.011   00:15:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:32.011  Running I/O for 10 seconds...
00:34:33.388                                                                                                  Latency(us)
00:34:33.388  
[2024-12-09T23:15:49.245Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:33.388  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:33.388  	 Nvme0n1             :       1.00   23241.00      90.79       0.00     0.00       0.00       0.00       0.00
00:34:33.388  
[2024-12-09T23:15:49.245Z]  ===================================================================================================================
00:34:33.388  
[2024-12-09T23:15:49.245Z]  Total                       :              23241.00      90.79       0.00     0.00       0.00       0.00       0.00
00:34:33.388  
00:34:33.955   00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:34.214  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:34.214  	 Nvme0n1             :       2.00   23495.00      91.78       0.00     0.00       0.00       0.00       0.00
00:34:34.214  
[2024-12-09T23:15:50.071Z]  ===================================================================================================================
00:34:34.214  
[2024-12-09T23:15:50.071Z]  Total                       :              23495.00      91.78       0.00     0.00       0.00       0.00       0.00
00:34:34.214  
00:34:34.214  true
00:34:34.214    00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:34.214    00:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:34:34.472   00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:34:34.472   00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:34:34.472   00:15:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3285229
00:34:35.040  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:35.040  	 Nvme0n1             :       3.00   23410.33      91.45       0.00     0.00       0.00       0.00       0.00
00:34:35.040  
[2024-12-09T23:15:50.897Z]  ===================================================================================================================
00:34:35.040  
[2024-12-09T23:15:50.897Z]  Total                       :              23410.33      91.45       0.00     0.00       0.00       0.00       0.00
00:34:35.040  
00:34:36.417  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:36.417  	 Nvme0n1             :       4.00   23558.50      92.03       0.00     0.00       0.00       0.00       0.00
00:34:36.417  
[2024-12-09T23:15:52.274Z]  ===================================================================================================================
00:34:36.417  
[2024-12-09T23:15:52.274Z]  Total                       :              23558.50      92.03       0.00     0.00       0.00       0.00       0.00
00:34:36.417  
00:34:37.353  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:37.353  	 Nvme0n1             :       5.00   23647.40      92.37       0.00     0.00       0.00       0.00       0.00
00:34:37.353  
[2024-12-09T23:15:53.210Z]  ===================================================================================================================
00:34:37.353  
[2024-12-09T23:15:53.210Z]  Total                       :              23647.40      92.37       0.00     0.00       0.00       0.00       0.00
00:34:37.353  
00:34:38.290  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:38.290  	 Nvme0n1             :       6.00   23727.83      92.69       0.00     0.00       0.00       0.00       0.00
00:34:38.290  
[2024-12-09T23:15:54.147Z]  ===================================================================================================================
00:34:38.290  
[2024-12-09T23:15:54.147Z]  Total                       :              23727.83      92.69       0.00     0.00       0.00       0.00       0.00
00:34:38.290  
00:34:39.230  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:39.230  	 Nvme0n1             :       7.00   23767.14      92.84       0.00     0.00       0.00       0.00       0.00
00:34:39.230  
[2024-12-09T23:15:55.087Z]  ===================================================================================================================
00:34:39.230  
[2024-12-09T23:15:55.087Z]  Total                       :              23767.14      92.84       0.00     0.00       0.00       0.00       0.00
00:34:39.230  
00:34:40.169  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:40.169  	 Nvme0n1             :       8.00   23812.50      93.02       0.00     0.00       0.00       0.00       0.00
00:34:40.169  
[2024-12-09T23:15:56.026Z]  ===================================================================================================================
00:34:40.169  
[2024-12-09T23:15:56.026Z]  Total                       :              23812.50      93.02       0.00     0.00       0.00       0.00       0.00
00:34:40.169  
00:34:41.107  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:41.107  	 Nvme0n1             :       9.00   23833.67      93.10       0.00     0.00       0.00       0.00       0.00
00:34:41.107  
[2024-12-09T23:15:56.964Z]  ===================================================================================================================
00:34:41.107  
[2024-12-09T23:15:56.964Z]  Total                       :              23833.67      93.10       0.00     0.00       0.00       0.00       0.00
00:34:41.107  
00:34:42.042  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:42.042  	 Nvme0n1             :      10.00   23858.70      93.20       0.00     0.00       0.00       0.00       0.00
00:34:42.042  
[2024-12-09T23:15:57.899Z]  ===================================================================================================================
00:34:42.042  
[2024-12-09T23:15:57.899Z]  Total                       :              23858.70      93.20       0.00     0.00       0.00       0.00       0.00
00:34:42.042  
00:34:42.042  
00:34:42.042                                                                                                  Latency(us)
00:34:42.042  
[2024-12-09T23:15:57.899Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:42.042  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:42.042  	 Nvme0n1             :      10.00   23855.95      93.19       0.00     0.00    5362.20    4930.80   26713.72
00:34:42.042  
[2024-12-09T23:15:57.899Z]  ===================================================================================================================
00:34:42.042  
[2024-12-09T23:15:57.899Z]  Total                       :              23855.95      93.19       0.00     0.00    5362.20    4930.80   26713.72
00:34:42.042  {
00:34:42.042    "results": [
00:34:42.042      {
00:34:42.042        "job": "Nvme0n1",
00:34:42.042        "core_mask": "0x2",
00:34:42.042        "workload": "randwrite",
00:34:42.042        "status": "finished",
00:34:42.042        "queue_depth": 128,
00:34:42.042        "io_size": 4096,
00:34:42.042        "runtime": 10.003125,
00:34:42.042        "iops": 23855.94501718213,
00:34:42.042        "mibps": 93.1872852233677,
00:34:42.042        "io_failed": 0,
00:34:42.042        "io_timeout": 0,
00:34:42.042        "avg_latency_us": 5362.203953182738,
00:34:42.042        "min_latency_us": 4930.80380952381,
00:34:42.042        "max_latency_us": 26713.721904761904
00:34:42.042      }
00:34:42.042    ],
00:34:42.042    "core_count": 1
00:34:42.042  }
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3285007
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3285007 ']'
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3285007
00:34:42.300    00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:42.300    00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3285007
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3285007'
00:34:42.300  killing process with pid 3285007
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3285007
00:34:42.300  Received shutdown signal, test time was about 10.000000 seconds
00:34:42.300  
00:34:42.300                                                                                                  Latency(us)
00:34:42.300  
[2024-12-09T23:15:58.157Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:42.300  
[2024-12-09T23:15:58.157Z]  ===================================================================================================================
00:34:42.300  
[2024-12-09T23:15:58.157Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:42.300   00:15:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3285007
00:34:42.300   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:42.557   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:42.816    00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:42.816    00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:34:43.074  [2024-12-10 00:15:58.853394] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:43.074    00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:43.074    00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:34:43.074   00:15:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:43.333  request:
00:34:43.333  {
00:34:43.333    "uuid": "7eda8c26-9e8c-4543-8105-842d534733ab",
00:34:43.333    "method": "bdev_lvol_get_lvstores",
00:34:43.333    "req_id": 1
00:34:43.333  }
00:34:43.333  Got JSON-RPC error response
00:34:43.333  response:
00:34:43.333  {
00:34:43.333    "code": -19,
00:34:43.333    "message": "No such device"
00:34:43.333  }
00:34:43.333   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1
00:34:43.333   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:34:43.333   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:34:43.333   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:34:43.333   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:43.592  aio_bdev
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 595e3e0a-e078-405d-bde5-3ee7a4c4a57c
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=595e3e0a-e078-405d-bde5-3ee7a4c4a57c
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:34:43.592   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:43.850   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 595e3e0a-e078-405d-bde5-3ee7a4c4a57c -t 2000
00:34:43.851  [
00:34:43.851    {
00:34:43.851      "name": "595e3e0a-e078-405d-bde5-3ee7a4c4a57c",
00:34:43.851      "aliases": [
00:34:43.851        "lvs/lvol"
00:34:43.851      ],
00:34:43.851      "product_name": "Logical Volume",
00:34:43.851      "block_size": 4096,
00:34:43.851      "num_blocks": 38912,
00:34:43.851      "uuid": "595e3e0a-e078-405d-bde5-3ee7a4c4a57c",
00:34:43.851      "assigned_rate_limits": {
00:34:43.851        "rw_ios_per_sec": 0,
00:34:43.851        "rw_mbytes_per_sec": 0,
00:34:43.851        "r_mbytes_per_sec": 0,
00:34:43.851        "w_mbytes_per_sec": 0
00:34:43.851      },
00:34:43.851      "claimed": false,
00:34:43.851      "zoned": false,
00:34:43.851      "supported_io_types": {
00:34:43.851        "read": true,
00:34:43.851        "write": true,
00:34:43.851        "unmap": true,
00:34:43.851        "flush": false,
00:34:43.851        "reset": true,
00:34:43.851        "nvme_admin": false,
00:34:43.851        "nvme_io": false,
00:34:43.851        "nvme_io_md": false,
00:34:43.851        "write_zeroes": true,
00:34:43.851        "zcopy": false,
00:34:43.851        "get_zone_info": false,
00:34:43.851        "zone_management": false,
00:34:43.851        "zone_append": false,
00:34:43.851        "compare": false,
00:34:43.851        "compare_and_write": false,
00:34:43.851        "abort": false,
00:34:43.851        "seek_hole": true,
00:34:43.851        "seek_data": true,
00:34:43.851        "copy": false,
00:34:43.851        "nvme_iov_md": false
00:34:43.851      },
00:34:43.851      "driver_specific": {
00:34:43.851        "lvol": {
00:34:43.851          "lvol_store_uuid": "7eda8c26-9e8c-4543-8105-842d534733ab",
00:34:43.851          "base_bdev": "aio_bdev",
00:34:43.851          "thin_provision": false,
00:34:43.851          "num_allocated_clusters": 38,
00:34:43.851          "snapshot": false,
00:34:43.851          "clone": false,
00:34:43.851          "esnap_clone": false
00:34:43.851        }
00:34:43.851      }
00:34:43.851    }
00:34:43.851  ]
00:34:43.851   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0
00:34:43.851    00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:43.851    00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:34:44.109   00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:34:44.109    00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:44.109    00:15:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:34:44.367   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:34:44.368   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 595e3e0a-e078-405d-bde5-3ee7a4c4a57c
00:34:44.633   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7eda8c26-9e8c-4543-8105-842d534733ab
00:34:44.633   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:34:44.894   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:44.894  
00:34:44.894  real	0m15.677s
00:34:44.894  user	0m15.248s
00:34:44.894  sys	0m1.436s
00:34:44.894   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:44.894   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x
00:34:44.894  ************************************
00:34:44.894  END TEST lvs_grow_clean
00:34:44.894  ************************************
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:34:44.895  ************************************
00:34:44.895  START TEST lvs_grow_dirty
00:34:44.895  ************************************
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:44.895   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:45.153    00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:45.153   00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev
00:34:45.153    00:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
00:34:45.411   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:45.411    00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:45.411    00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters'
00:34:45.669   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49
00:34:45.669   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 ))
00:34:45.669    00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4b0e0943-2761-486b-bf3c-e10552ab74fc lvol 150
00:34:45.928   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=879ae3db-ad04-41e5-be39-e6187762489f
00:34:45.928   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:34:45.928   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev
00:34:45.928  [2024-12-10 00:16:01.745337] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400
00:34:45.928  [2024-12-10 00:16:01.745463] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1
00:34:45.928  true
00:34:45.928    00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:45.928    00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters'
00:34:46.186   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 ))
00:34:46.186   00:16:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:34:46.445   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 879ae3db-ad04-41e5-be39-e6187762489f
00:34:46.703   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:34:46.703  [2024-12-10 00:16:02.481749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:46.703   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3287522
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3287522 /var/tmp/bdevperf.sock
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3287522 ']'
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:34:46.962  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:46.962   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:34:46.962  [2024-12-10 00:16:02.739136] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:34:46.962  [2024-12-10 00:16:02.739189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287522 ]
00:34:46.962  [2024-12-10 00:16:02.811347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:47.221  [2024-12-10 00:16:02.852314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:34:47.221   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:47.221   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:34:47.221   00:16:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:34:47.480  Nvme0n1
00:34:47.480   00:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:34:47.738  [
00:34:47.738    {
00:34:47.738      "name": "Nvme0n1",
00:34:47.738      "aliases": [
00:34:47.738        "879ae3db-ad04-41e5-be39-e6187762489f"
00:34:47.738      ],
00:34:47.738      "product_name": "NVMe disk",
00:34:47.738      "block_size": 4096,
00:34:47.738      "num_blocks": 38912,
00:34:47.738      "uuid": "879ae3db-ad04-41e5-be39-e6187762489f",
00:34:47.738      "numa_id": 1,
00:34:47.738      "assigned_rate_limits": {
00:34:47.738        "rw_ios_per_sec": 0,
00:34:47.738        "rw_mbytes_per_sec": 0,
00:34:47.738        "r_mbytes_per_sec": 0,
00:34:47.738        "w_mbytes_per_sec": 0
00:34:47.738      },
00:34:47.738      "claimed": false,
00:34:47.738      "zoned": false,
00:34:47.738      "supported_io_types": {
00:34:47.738        "read": true,
00:34:47.738        "write": true,
00:34:47.738        "unmap": true,
00:34:47.738        "flush": true,
00:34:47.738        "reset": true,
00:34:47.738        "nvme_admin": true,
00:34:47.738        "nvme_io": true,
00:34:47.738        "nvme_io_md": false,
00:34:47.738        "write_zeroes": true,
00:34:47.738        "zcopy": false,
00:34:47.738        "get_zone_info": false,
00:34:47.738        "zone_management": false,
00:34:47.738        "zone_append": false,
00:34:47.738        "compare": true,
00:34:47.738        "compare_and_write": true,
00:34:47.738        "abort": true,
00:34:47.738        "seek_hole": false,
00:34:47.738        "seek_data": false,
00:34:47.738        "copy": true,
00:34:47.739        "nvme_iov_md": false
00:34:47.739      },
00:34:47.739      "memory_domains": [
00:34:47.739        {
00:34:47.739          "dma_device_id": "system",
00:34:47.739          "dma_device_type": 1
00:34:47.739        }
00:34:47.739      ],
00:34:47.739      "driver_specific": {
00:34:47.739        "nvme": [
00:34:47.739          {
00:34:47.739            "trid": {
00:34:47.739              "trtype": "TCP",
00:34:47.739              "adrfam": "IPv4",
00:34:47.739              "traddr": "10.0.0.2",
00:34:47.739              "trsvcid": "4420",
00:34:47.739              "subnqn": "nqn.2016-06.io.spdk:cnode0"
00:34:47.739            },
00:34:47.739            "ctrlr_data": {
00:34:47.739              "cntlid": 1,
00:34:47.739              "vendor_id": "0x8086",
00:34:47.739              "model_number": "SPDK bdev Controller",
00:34:47.739              "serial_number": "SPDK0",
00:34:47.739              "firmware_revision": "25.01",
00:34:47.739              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:34:47.739              "oacs": {
00:34:47.739                "security": 0,
00:34:47.739                "format": 0,
00:34:47.739                "firmware": 0,
00:34:47.739                "ns_manage": 0
00:34:47.739              },
00:34:47.739              "multi_ctrlr": true,
00:34:47.739              "ana_reporting": false
00:34:47.739            },
00:34:47.739            "vs": {
00:34:47.739              "nvme_version": "1.3"
00:34:47.739            },
00:34:47.739            "ns_data": {
00:34:47.739              "id": 1,
00:34:47.739              "can_share": true
00:34:47.739            }
00:34:47.739          }
00:34:47.739        ],
00:34:47.739        "mp_policy": "active_passive"
00:34:47.739      }
00:34:47.739    }
00:34:47.739  ]
00:34:47.739   00:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3287739
00:34:47.739   00:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:34:47.739   00:16:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:34:47.739  Running I/O for 10 seconds...
00:34:49.116                                                                                                  Latency(us)
00:34:49.116  
[2024-12-09T23:16:04.973Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:49.116  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:49.116  	 Nvme0n1             :       1.00   23114.00      90.29       0.00     0.00       0.00       0.00       0.00
00:34:49.116  
[2024-12-09T23:16:04.973Z]  ===================================================================================================================
00:34:49.116  
[2024-12-09T23:16:04.973Z]  Total                       :              23114.00      90.29       0.00     0.00       0.00       0.00       0.00
00:34:49.116  
00:34:49.692   00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:49.692  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:49.692  	 Nvme0n1             :       2.00   23400.00      91.41       0.00     0.00       0.00       0.00       0.00
00:34:49.692  
[2024-12-09T23:16:05.549Z]  ===================================================================================================================
00:34:49.692  
[2024-12-09T23:16:05.549Z]  Total                       :              23400.00      91.41       0.00     0.00       0.00       0.00       0.00
00:34:49.692  
00:34:49.950  true
00:34:49.950    00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:49.950    00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:34:50.209   00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:34:50.209   00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:34:50.209   00:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3287739
00:34:50.777  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:50.777  	 Nvme0n1             :       3.00   23490.33      91.76       0.00     0.00       0.00       0.00       0.00
00:34:50.777  
[2024-12-09T23:16:06.634Z]  ===================================================================================================================
00:34:50.777  
[2024-12-09T23:16:06.634Z]  Total                       :              23490.33      91.76       0.00     0.00       0.00       0.00       0.00
00:34:50.777  
00:34:51.717  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:51.717  	 Nvme0n1             :       4.00   23599.00      92.18       0.00     0.00       0.00       0.00       0.00
00:34:51.717  
[2024-12-09T23:16:07.574Z]  ===================================================================================================================
00:34:51.717  
[2024-12-09T23:16:07.574Z]  Total                       :              23599.00      92.18       0.00     0.00       0.00       0.00       0.00
00:34:51.717  
00:34:53.094  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:53.094  	 Nvme0n1             :       5.00   23673.80      92.48       0.00     0.00       0.00       0.00       0.00
00:34:53.094  
[2024-12-09T23:16:08.951Z]  ===================================================================================================================
00:34:53.094  
[2024-12-09T23:16:08.951Z]  Total                       :              23673.80      92.48       0.00     0.00       0.00       0.00       0.00
00:34:53.094  
00:34:54.031  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:54.031  	 Nvme0n1             :       6.00   23728.67      92.69       0.00     0.00       0.00       0.00       0.00
00:34:54.031  
[2024-12-09T23:16:09.888Z]  ===================================================================================================================
00:34:54.031  
[2024-12-09T23:16:09.888Z]  Total                       :              23728.67      92.69       0.00     0.00       0.00       0.00       0.00
00:34:54.031  
00:34:54.968  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:54.968  	 Nvme0n1             :       7.00   23722.57      92.67       0.00     0.00       0.00       0.00       0.00
00:34:54.968  
[2024-12-09T23:16:10.825Z]  ===================================================================================================================
00:34:54.968  
[2024-12-09T23:16:10.825Z]  Total                       :              23722.57      92.67       0.00     0.00       0.00       0.00       0.00
00:34:54.968  
00:34:56.045  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:56.045  	 Nvme0n1             :       8.00   23755.75      92.80       0.00     0.00       0.00       0.00       0.00
00:34:56.045  
[2024-12-09T23:16:11.902Z]  ===================================================================================================================
00:34:56.045  
[2024-12-09T23:16:11.902Z]  Total                       :              23755.75      92.80       0.00     0.00       0.00       0.00       0.00
00:34:56.045  
00:34:56.980  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:56.980  	 Nvme0n1             :       9.00   23783.22      92.90       0.00     0.00       0.00       0.00       0.00
00:34:56.980  
[2024-12-09T23:16:12.837Z]  ===================================================================================================================
00:34:56.980  
[2024-12-09T23:16:12.837Z]  Total                       :              23783.22      92.90       0.00     0.00       0.00       0.00       0.00
00:34:56.980  
00:34:57.916  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:57.916  	 Nvme0n1             :      10.00   23805.20      92.99       0.00     0.00       0.00       0.00       0.00
00:34:57.916  
[2024-12-09T23:16:13.773Z]  ===================================================================================================================
00:34:57.916  
[2024-12-09T23:16:13.773Z]  Total                       :              23805.20      92.99       0.00     0.00       0.00       0.00       0.00
00:34:57.916  
00:34:57.916  
00:34:57.916                                                                                                  Latency(us)
00:34:57.916  
[2024-12-09T23:16:13.773Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:57.916  Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:34:57.916  	 Nvme0n1             :      10.00   23806.80      93.00       0.00     0.00    5373.63    3151.97   27462.70
00:34:57.916  
[2024-12-09T23:16:13.773Z]  ===================================================================================================================
00:34:57.916  
[2024-12-09T23:16:13.773Z]  Total                       :              23806.80      93.00       0.00     0.00    5373.63    3151.97   27462.70
00:34:57.916  {
00:34:57.916    "results": [
00:34:57.916      {
00:34:57.916        "job": "Nvme0n1",
00:34:57.916        "core_mask": "0x2",
00:34:57.916        "workload": "randwrite",
00:34:57.916        "status": "finished",
00:34:57.916        "queue_depth": 128,
00:34:57.916        "io_size": 4096,
00:34:57.916        "runtime": 10.004703,
00:34:57.916        "iops": 23806.80366023859,
00:34:57.916        "mibps": 92.995326797807,
00:34:57.916        "io_failed": 0,
00:34:57.916        "io_timeout": 0,
00:34:57.916        "avg_latency_us": 5373.6320688394935,
00:34:57.916        "min_latency_us": 3151.9695238095237,
00:34:57.916        "max_latency_us": 27462.704761904763
00:34:57.916      }
00:34:57.916    ],
00:34:57.916    "core_count": 1
00:34:57.916  }
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3287522
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3287522 ']'
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3287522
00:34:57.916    00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:34:57.916    00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287522
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287522'
00:34:57.916  killing process with pid 3287522
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3287522
00:34:57.916  Received shutdown signal, test time was about 10.000000 seconds
00:34:57.916  
00:34:57.916                                                                                                  Latency(us)
00:34:57.916  
[2024-12-09T23:16:13.773Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:34:57.916  
[2024-12-09T23:16:13.773Z]  ===================================================================================================================
00:34:57.916  
[2024-12-09T23:16:13.773Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:34:57.916   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3287522
00:34:58.181   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:58.181   00:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:34:58.444    00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:58.444    00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]]
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3284516
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3284516
00:34:58.704  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3284516 Killed                  "${NVMF_APP[@]}" "$@"
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3289425
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3289425
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3289425 ']'
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:58.704  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:58.704   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:34:58.704  [2024-12-10 00:16:14.456917] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:34:58.704  [2024-12-10 00:16:14.457824] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:34:58.704  [2024-12-10 00:16:14.457861] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:58.704  [2024-12-10 00:16:14.537413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:58.967  [2024-12-10 00:16:14.576935] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:58.967  [2024-12-10 00:16:14.576968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:58.967  [2024-12-10 00:16:14.576974] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:58.967  [2024-12-10 00:16:14.576980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:58.967  [2024-12-10 00:16:14.576985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:58.967  [2024-12-10 00:16:14.577478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:58.967  [2024-12-10 00:16:14.644267] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:34:58.967  [2024-12-10 00:16:14.644466] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:34:58.967   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:58.967    00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:34:59.227  [2024-12-10 00:16:14.878832] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore
00:34:59.227  [2024-12-10 00:16:14.879026] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0
00:34:59.227  [2024-12-10 00:16:14.879111] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 879ae3db-ad04-41e5-be39-e6187762489f
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=879ae3db-ad04-41e5-be39-e6187762489f
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:34:59.227   00:16:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:34:59.486   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 879ae3db-ad04-41e5-be39-e6187762489f -t 2000
00:34:59.486  [
00:34:59.486    {
00:34:59.486      "name": "879ae3db-ad04-41e5-be39-e6187762489f",
00:34:59.486      "aliases": [
00:34:59.486        "lvs/lvol"
00:34:59.486      ],
00:34:59.486      "product_name": "Logical Volume",
00:34:59.486      "block_size": 4096,
00:34:59.486      "num_blocks": 38912,
00:34:59.486      "uuid": "879ae3db-ad04-41e5-be39-e6187762489f",
00:34:59.486      "assigned_rate_limits": {
00:34:59.486        "rw_ios_per_sec": 0,
00:34:59.486        "rw_mbytes_per_sec": 0,
00:34:59.486        "r_mbytes_per_sec": 0,
00:34:59.486        "w_mbytes_per_sec": 0
00:34:59.486      },
00:34:59.486      "claimed": false,
00:34:59.486      "zoned": false,
00:34:59.486      "supported_io_types": {
00:34:59.486        "read": true,
00:34:59.486        "write": true,
00:34:59.486        "unmap": true,
00:34:59.486        "flush": false,
00:34:59.486        "reset": true,
00:34:59.486        "nvme_admin": false,
00:34:59.486        "nvme_io": false,
00:34:59.486        "nvme_io_md": false,
00:34:59.486        "write_zeroes": true,
00:34:59.486        "zcopy": false,
00:34:59.486        "get_zone_info": false,
00:34:59.486        "zone_management": false,
00:34:59.486        "zone_append": false,
00:34:59.486        "compare": false,
00:34:59.486        "compare_and_write": false,
00:34:59.486        "abort": false,
00:34:59.486        "seek_hole": true,
00:34:59.486        "seek_data": true,
00:34:59.486        "copy": false,
00:34:59.486        "nvme_iov_md": false
00:34:59.486      },
00:34:59.486      "driver_specific": {
00:34:59.486        "lvol": {
00:34:59.486          "lvol_store_uuid": "4b0e0943-2761-486b-bf3c-e10552ab74fc",
00:34:59.486          "base_bdev": "aio_bdev",
00:34:59.486          "thin_provision": false,
00:34:59.486          "num_allocated_clusters": 38,
00:34:59.486          "snapshot": false,
00:34:59.486          "clone": false,
00:34:59.486          "esnap_clone": false
00:34:59.486        }
00:34:59.486      }
00:34:59.486    }
00:34:59.486  ]
00:34:59.486   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:34:59.486    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:59.486    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters'
00:34:59.745   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 ))
00:34:59.745    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:34:59.745    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters'
00:35:00.004   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 ))
00:35:00.004   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:35:00.004  [2024-12-10 00:16:15.825929] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:00.263    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:00.263    00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:35:00.263   00:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:00.263  request:
00:35:00.263  {
00:35:00.263    "uuid": "4b0e0943-2761-486b-bf3c-e10552ab74fc",
00:35:00.263    "method": "bdev_lvol_get_lvstores",
00:35:00.263    "req_id": 1
00:35:00.263  }
00:35:00.263  Got JSON-RPC error response
00:35:00.263  response:
00:35:00.263  {
00:35:00.263    "code": -19,
00:35:00.263    "message": "No such device"
00:35:00.263  }
00:35:00.263   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1
00:35:00.263   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:35:00.263   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:35:00.263   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:35:00.263   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
00:35:00.522  aio_bdev
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 879ae3db-ad04-41e5-be39-e6187762489f
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=879ae3db-ad04-41e5-be39-e6187762489f
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:35:00.522   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine
00:35:00.787   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 879ae3db-ad04-41e5-be39-e6187762489f -t 2000
00:35:00.787  [
00:35:00.787    {
00:35:00.787      "name": "879ae3db-ad04-41e5-be39-e6187762489f",
00:35:00.787      "aliases": [
00:35:00.787        "lvs/lvol"
00:35:00.787      ],
00:35:00.787      "product_name": "Logical Volume",
00:35:00.787      "block_size": 4096,
00:35:00.787      "num_blocks": 38912,
00:35:00.787      "uuid": "879ae3db-ad04-41e5-be39-e6187762489f",
00:35:00.787      "assigned_rate_limits": {
00:35:00.787        "rw_ios_per_sec": 0,
00:35:00.787        "rw_mbytes_per_sec": 0,
00:35:00.787        "r_mbytes_per_sec": 0,
00:35:00.787        "w_mbytes_per_sec": 0
00:35:00.787      },
00:35:00.787      "claimed": false,
00:35:00.787      "zoned": false,
00:35:00.787      "supported_io_types": {
00:35:00.787        "read": true,
00:35:00.787        "write": true,
00:35:00.787        "unmap": true,
00:35:00.787        "flush": false,
00:35:00.787        "reset": true,
00:35:00.787        "nvme_admin": false,
00:35:00.787        "nvme_io": false,
00:35:00.787        "nvme_io_md": false,
00:35:00.787        "write_zeroes": true,
00:35:00.787        "zcopy": false,
00:35:00.787        "get_zone_info": false,
00:35:00.787        "zone_management": false,
00:35:00.787        "zone_append": false,
00:35:00.787        "compare": false,
00:35:00.787        "compare_and_write": false,
00:35:00.787        "abort": false,
00:35:00.787        "seek_hole": true,
00:35:00.787        "seek_data": true,
00:35:00.787        "copy": false,
00:35:00.787        "nvme_iov_md": false
00:35:00.787      },
00:35:00.787      "driver_specific": {
00:35:00.787        "lvol": {
00:35:00.787          "lvol_store_uuid": "4b0e0943-2761-486b-bf3c-e10552ab74fc",
00:35:00.787          "base_bdev": "aio_bdev",
00:35:00.787          "thin_provision": false,
00:35:00.787          "num_allocated_clusters": 38,
00:35:00.787          "snapshot": false,
00:35:00.787          "clone": false,
00:35:00.787          "esnap_clone": false
00:35:00.787        }
00:35:00.787      }
00:35:00.787    }
00:35:00.787  ]
00:35:00.787   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0
00:35:00.787    00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:00.787    00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters'
00:35:01.046   00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 ))
00:35:01.046    00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:01.046    00:16:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters'
00:35:01.305   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 ))
00:35:01.305   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 879ae3db-ad04-41e5-be39-e6187762489f
00:35:01.563   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4b0e0943-2761-486b-bf3c-e10552ab74fc
00:35:01.821   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:35:01.822   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
00:35:01.822  
00:35:01.822  real	0m16.923s
00:35:01.822  user	0m34.454s
00:35:01.822  sys	0m3.702s
00:35:01.822   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:01.822   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x
00:35:01.822  ************************************
00:35:01.822  END TEST lvs_grow_dirty
00:35:01.822  ************************************
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:35:02.082    00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:35:02.082  nvmf_trace.0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:02.082  rmmod nvme_tcp
00:35:02.082  rmmod nvme_fabrics
00:35:02.082  rmmod nvme_keyring
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3289425 ']'
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3289425
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3289425 ']'
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3289425
00:35:02.082    00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:02.082    00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3289425
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3289425'
00:35:02.082  killing process with pid 3289425
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3289425
00:35:02.082   00:16:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3289425
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:02.342   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:02.343   00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:02.343    00:16:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:04.879  
00:35:04.879  real	0m42.367s
00:35:04.879  user	0m52.382s
00:35:04.879  sys	0m10.004s
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:35:04.879  ************************************
00:35:04.879  END TEST nvmf_lvs_grow
00:35:04.879  ************************************
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:04.879  ************************************
00:35:04.879  START TEST nvmf_bdev_io_wait
00:35:04.879  ************************************
00:35:04.879   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode
00:35:04.879  * Looking for test storage...
00:35:04.879  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:04.879     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version
00:35:04.879     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-:
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-:
00:35:04.879    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:04.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:04.880  		--rc genhtml_branch_coverage=1
00:35:04.880  		--rc genhtml_function_coverage=1
00:35:04.880  		--rc genhtml_legend=1
00:35:04.880  		--rc geninfo_all_blocks=1
00:35:04.880  		--rc geninfo_unexecuted_blocks=1
00:35:04.880  		
00:35:04.880  		'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:04.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:04.880  		--rc genhtml_branch_coverage=1
00:35:04.880  		--rc genhtml_function_coverage=1
00:35:04.880  		--rc genhtml_legend=1
00:35:04.880  		--rc geninfo_all_blocks=1
00:35:04.880  		--rc geninfo_unexecuted_blocks=1
00:35:04.880  		
00:35:04.880  		'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:04.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:04.880  		--rc genhtml_branch_coverage=1
00:35:04.880  		--rc genhtml_function_coverage=1
00:35:04.880  		--rc genhtml_legend=1
00:35:04.880  		--rc geninfo_all_blocks=1
00:35:04.880  		--rc geninfo_unexecuted_blocks=1
00:35:04.880  		
00:35:04.880  		'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:04.880  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:04.880  		--rc genhtml_branch_coverage=1
00:35:04.880  		--rc genhtml_function_coverage=1
00:35:04.880  		--rc genhtml_legend=1
00:35:04.880  		--rc geninfo_all_blocks=1
00:35:04.880  		--rc geninfo_unexecuted_blocks=1
00:35:04.880  		
00:35:04.880  		'
00:35:04.880   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:04.880    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:04.880     00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:04.880      00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:04.880      00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:04.880      00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:04.880      00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH
00:35:04.881      00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:04.881    00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable
00:35:04.881   00:16:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=()
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:35:10.152  Found 0000:af:00.0 (0x8086 - 0x159b)
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:35:10.152  Found 0000:af:00.1 (0x8086 - 0x159b)
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:35:10.152  Found net devices under 0000:af:00.0: cvl_0_0
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:35:10.152  Found net devices under 0000:af:00.1: cvl_0_1
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:10.152   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:10.153   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:10.153   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:10.153   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:10.153   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:10.153   00:16:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:10.410   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:10.411  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:10.411  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms
00:35:10.411  
00:35:10.411  --- 10.0.0.2 ping statistics ---
00:35:10.411  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:10.411  rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:10.411  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:10.411  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:35:10.411  
00:35:10.411  --- 10.0.0.1 ping statistics ---
00:35:10.411  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:10.411  rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:10.411   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3293513
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3293513
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3293513 ']'
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:10.670  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.670  [2024-12-10 00:16:26.321092] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:10.670  [2024-12-10 00:16:26.321973] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:10.670  [2024-12-10 00:16:26.322005] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:10.670  [2024-12-10 00:16:26.396656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:35:10.670  [2024-12-10 00:16:26.438491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:10.670  [2024-12-10 00:16:26.438529] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:10.670  [2024-12-10 00:16:26.438536] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:10.670  [2024-12-10 00:16:26.438541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:10.670  [2024-12-10 00:16:26.438546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:10.670  [2024-12-10 00:16:26.440033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:10.670  [2024-12-10 00:16:26.440146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:35:10.670  [2024-12-10 00:16:26.440252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:10.670  [2024-12-10 00:16:26.440253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:35:10.670  [2024-12-10 00:16:26.440510] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.670   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929  [2024-12-10 00:16:26.578051] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:10.929  [2024-12-10 00:16:26.578760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:35:10.929  [2024-12-10 00:16:26.578787] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:35:10.929  [2024-12-10 00:16:26.578939] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929  [2024-12-10 00:16:26.588871] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929  Malloc0
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:10.929  [2024-12-10 00:16:26.661187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3293538
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3293540
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.929  {
00:35:10.929    "params": {
00:35:10.929      "name": "Nvme$subsystem",
00:35:10.929      "trtype": "$TEST_TRANSPORT",
00:35:10.929      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.929      "adrfam": "ipv4",
00:35:10.929      "trsvcid": "$NVMF_PORT",
00:35:10.929      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.929      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.929      "hdgst": ${hdgst:-false},
00:35:10.929      "ddgst": ${ddgst:-false}
00:35:10.929    },
00:35:10.929    "method": "bdev_nvme_attach_controller"
00:35:10.929  }
00:35:10.929  EOF
00:35:10.929  )")
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:35:10.929   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3293542
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:35:10.929    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.930  {
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme$subsystem",
00:35:10.930      "trtype": "$TEST_TRANSPORT",
00:35:10.930      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "$NVMF_PORT",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.930      "hdgst": ${hdgst:-false},
00:35:10.930      "ddgst": ${ddgst:-false}
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }
00:35:10.930  EOF
00:35:10.930  )")
00:35:10.930   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:35:10.930   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3293545
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:35:10.930   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.930   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.930  {
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme$subsystem",
00:35:10.930      "trtype": "$TEST_TRANSPORT",
00:35:10.930      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "$NVMF_PORT",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.930      "hdgst": ${hdgst:-false},
00:35:10.930      "ddgst": ${ddgst:-false}
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }
00:35:10.930  EOF
00:35:10.930  )")
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=()
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:10.930  {
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme$subsystem",
00:35:10.930      "trtype": "$TEST_TRANSPORT",
00:35:10.930      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "$NVMF_PORT",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:10.930      "hdgst": ${hdgst:-false},
00:35:10.930      "ddgst": ${ddgst:-false}
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }
00:35:10.930  EOF
00:35:10.930  )")
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:35:10.930   00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3293538
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme1",
00:35:10.930      "trtype": "tcp",
00:35:10.930      "traddr": "10.0.0.2",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "4420",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:10.930      "hdgst": false,
00:35:10.930      "ddgst": false
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }'
00:35:10.930    00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq .
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme1",
00:35:10.930      "trtype": "tcp",
00:35:10.930      "traddr": "10.0.0.2",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "4420",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:10.930      "hdgst": false,
00:35:10.930      "ddgst": false
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }'
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme1",
00:35:10.930      "trtype": "tcp",
00:35:10.930      "traddr": "10.0.0.2",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "4420",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:10.930      "hdgst": false,
00:35:10.930      "ddgst": false
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }'
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=,
00:35:10.930     00:16:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:10.930    "params": {
00:35:10.930      "name": "Nvme1",
00:35:10.930      "trtype": "tcp",
00:35:10.930      "traddr": "10.0.0.2",
00:35:10.930      "adrfam": "ipv4",
00:35:10.930      "trsvcid": "4420",
00:35:10.930      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:10.930      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:10.930      "hdgst": false,
00:35:10.930      "ddgst": false
00:35:10.930    },
00:35:10.930    "method": "bdev_nvme_attach_controller"
00:35:10.930  }'
00:35:10.930  [2024-12-10 00:16:26.714206] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:10.930  [2024-12-10 00:16:26.714207] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:10.930  [2024-12-10 00:16:26.714231] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:10.930  [2024-12-10 00:16:26.714260] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:35:10.930  [2024-12-10 00:16:26.714260] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:35:10.930  [2024-12-10 00:16:26.714270] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:35:10.930  [2024-12-10 00:16:26.717153] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:10.930  [2024-12-10 00:16:26.717205] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:35:11.188  [2024-12-10 00:16:26.901840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.189  [2024-12-10 00:16:26.946208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:35:11.189  [2024-12-10 00:16:26.994479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.189  [2024-12-10 00:16:27.040257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:35:11.447  [2024-12-10 00:16:27.094240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.447  [2024-12-10 00:16:27.146249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:11.447  [2024-12-10 00:16:27.147034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:35:11.447  [2024-12-10 00:16:27.188098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:35:11.447  Running I/O for 1 seconds...
00:35:11.447  Running I/O for 1 seconds...
00:35:11.447  Running I/O for 1 seconds...
00:35:11.704  Running I/O for 1 seconds...
00:35:12.639      11443.00 IOPS,    44.70 MiB/s
00:35:12.639                                                                                                  Latency(us)
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:12.639  Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:35:12.639  	 Nvme1n1             :       1.01   11506.79      44.95       0.00     0.00   11088.53    3573.27   12607.88
00:35:12.639  
[2024-12-09T23:16:28.496Z]  ===================================================================================================================
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Total                       :              11506.79      44.95       0.00     0.00   11088.53    3573.27   12607.88
00:35:12.639       9962.00 IOPS,    38.91 MiB/s
[2024-12-09T23:16:28.496Z]     11361.00 IOPS,    44.38 MiB/s
00:35:12.639                                                                                                  Latency(us)
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:12.639  Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:35:12.639  	 Nvme1n1             :       1.00   11457.13      44.75       0.00     0.00   11148.96    2200.14   16602.45
00:35:12.639  
[2024-12-09T23:16:28.496Z]  ===================================================================================================================
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Total                       :              11457.13      44.75       0.00     0.00   11148.96    2200.14   16602.45
00:35:12.639     242120.00 IOPS,   945.78 MiB/s
00:35:12.639                                                                                                  Latency(us)
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:12.639  Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:35:12.639  	 Nvme1n1             :       1.00  241754.49     944.35       0.00     0.00     526.85     221.38    1497.97
00:35:12.639  
[2024-12-09T23:16:28.496Z]  ===================================================================================================================
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Total                       :             241754.49     944.35       0.00     0.00     526.85     221.38    1497.97
00:35:12.639  
00:35:12.639                                                                                                  Latency(us)
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:12.639  Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:35:12.639  	 Nvme1n1             :       1.05    9634.95      37.64       0.00     0.00   12732.97    4369.07   46436.94
00:35:12.639  
[2024-12-09T23:16:28.496Z]  ===================================================================================================================
00:35:12.639  
[2024-12-09T23:16:28.496Z]  Total                       :               9634.95      37.64       0.00     0.00   12732.97    4369.07   46436.94
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3293540
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3293542
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3293545
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:12.639   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:12.639  rmmod nvme_tcp
00:35:12.899  rmmod nvme_fabrics
00:35:12.899  rmmod nvme_keyring
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3293513 ']'
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3293513
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3293513 ']'
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3293513
00:35:12.899    00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:12.899    00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3293513
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3293513'
00:35:12.899  killing process with pid 3293513
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3293513
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3293513
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:12.899   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:13.158   00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:13.158    00:16:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:15.058  
00:35:15.058  real	0m10.634s
00:35:15.058  user	0m14.516s
00:35:15.058  sys	0m6.510s
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:35:15.058  ************************************
00:35:15.058  END TEST nvmf_bdev_io_wait
00:35:15.058  ************************************
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:15.058  ************************************
00:35:15.058  START TEST nvmf_queue_depth
00:35:15.058  ************************************
00:35:15.058   00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode
00:35:15.317  * Looking for test storage...
00:35:15.317  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:15.317    00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:15.317     00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:35:15.317     00:16:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:15.317    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:15.317    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:15.317    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:15.317    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:15.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:15.318  		--rc genhtml_branch_coverage=1
00:35:15.318  		--rc genhtml_function_coverage=1
00:35:15.318  		--rc genhtml_legend=1
00:35:15.318  		--rc geninfo_all_blocks=1
00:35:15.318  		--rc geninfo_unexecuted_blocks=1
00:35:15.318  		
00:35:15.318  		'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:15.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:15.318  		--rc genhtml_branch_coverage=1
00:35:15.318  		--rc genhtml_function_coverage=1
00:35:15.318  		--rc genhtml_legend=1
00:35:15.318  		--rc geninfo_all_blocks=1
00:35:15.318  		--rc geninfo_unexecuted_blocks=1
00:35:15.318  		
00:35:15.318  		'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:15.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:15.318  		--rc genhtml_branch_coverage=1
00:35:15.318  		--rc genhtml_function_coverage=1
00:35:15.318  		--rc genhtml_legend=1
00:35:15.318  		--rc geninfo_all_blocks=1
00:35:15.318  		--rc geninfo_unexecuted_blocks=1
00:35:15.318  		
00:35:15.318  		'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:15.318  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:15.318  		--rc genhtml_branch_coverage=1
00:35:15.318  		--rc genhtml_function_coverage=1
00:35:15.318  		--rc genhtml_legend=1
00:35:15.318  		--rc geninfo_all_blocks=1
00:35:15.318  		--rc geninfo_unexecuted_blocks=1
00:35:15.318  		
00:35:15.318  		'
00:35:15.318   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:15.318    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:15.318     00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:15.318      00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:15.318      00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:15.318      00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:15.318      00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:35:15.319      00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:15.319    00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:35:15.319   00:16:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:35:21.888  Found 0000:af:00.0 (0x8086 - 0x159b)
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:35:21.888  Found 0000:af:00.1 (0x8086 - 0x159b)
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:35:21.888  Found net devices under 0000:af:00.0: cvl_0_0
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:21.888   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:35:21.889  Found net devices under 0000:af:00.1: cvl_0_1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:21.889  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:21.889  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms
00:35:21.889  
00:35:21.889  --- 10.0.0.2 ping statistics ---
00:35:21.889  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:21.889  rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:21.889  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:21.889  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms
00:35:21.889  
00:35:21.889  --- 10.0.0.1 ping statistics ---
00:35:21.889  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:21.889  rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3297247
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3297247
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3297247 ']'
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:21.889  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:21.889   00:16:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.889  [2024-12-10 00:16:37.015188] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:21.889  [2024-12-10 00:16:37.016095] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:21.889  [2024-12-10 00:16:37.016128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:21.889  [2024-12-10 00:16:37.094357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:21.889  [2024-12-10 00:16:37.133679] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:21.889  [2024-12-10 00:16:37.133714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:21.889  [2024-12-10 00:16:37.133722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:21.889  [2024-12-10 00:16:37.133727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:21.889  [2024-12-10 00:16:37.133732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:21.889  [2024-12-10 00:16:37.134220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:21.889  [2024-12-10 00:16:37.200181] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:21.889  [2024-12-10 00:16:37.200375] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:21.889   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890  [2024-12-10 00:16:37.266849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890  Malloc0
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890  [2024-12-10 00:16:37.342998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3297401
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3297401 /var/tmp/bdevperf.sock
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3297401 ']'
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:35:21.890  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890  [2024-12-10 00:16:37.394524] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:21.890  [2024-12-10 00:16:37.394566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3297401 ]
00:35:21.890  [2024-12-10 00:16:37.468484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:21.890  [2024-12-10 00:16:37.508978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:21.890  NVMe0n1
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:21.890   00:16:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:35:22.148  Running I/O for 10 seconds...
00:35:24.020      11720.00 IOPS,    45.78 MiB/s
[2024-12-09T23:16:40.816Z]     12285.00 IOPS,    47.99 MiB/s
[2024-12-09T23:16:42.195Z]     12211.33 IOPS,    47.70 MiB/s
[2024-12-09T23:16:43.130Z]     12291.50 IOPS,    48.01 MiB/s
[2024-12-09T23:16:44.066Z]     12341.60 IOPS,    48.21 MiB/s
[2024-12-09T23:16:45.002Z]     12446.83 IOPS,    48.62 MiB/s
[2024-12-09T23:16:45.938Z]     12461.71 IOPS,    48.68 MiB/s
[2024-12-09T23:16:46.872Z]     12530.12 IOPS,    48.95 MiB/s
[2024-12-09T23:16:48.248Z]     12525.67 IOPS,    48.93 MiB/s
[2024-12-09T23:16:48.248Z]     12577.40 IOPS,    49.13 MiB/s
00:35:32.391                                                                                                  Latency(us)
00:35:32.391  
[2024-12-09T23:16:48.248Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:32.391  Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:35:32.391  	 Verification LBA range: start 0x0 length 0x4000
00:35:32.391  	 NVMe0n1             :      10.07   12586.79      49.17       0.00     0.00   81084.11   19348.72   54426.09
00:35:32.391  
[2024-12-09T23:16:48.248Z]  ===================================================================================================================
00:35:32.391  
[2024-12-09T23:16:48.248Z]  Total                       :              12586.79      49.17       0.00     0.00   81084.11   19348.72   54426.09
00:35:32.391  {
00:35:32.391    "results": [
00:35:32.391      {
00:35:32.391        "job": "NVMe0n1",
00:35:32.391        "core_mask": "0x1",
00:35:32.391        "workload": "verify",
00:35:32.391        "status": "finished",
00:35:32.391        "verify_range": {
00:35:32.391          "start": 0,
00:35:32.391          "length": 16384
00:35:32.391        },
00:35:32.391        "queue_depth": 1024,
00:35:32.391        "io_size": 4096,
00:35:32.391        "runtime": 10.071987,
00:35:32.391        "iops": 12586.791464286043,
00:35:32.391        "mibps": 49.16715415736736,
00:35:32.391        "io_failed": 0,
00:35:32.391        "io_timeout": 0,
00:35:32.391        "avg_latency_us": 81084.11416190943,
00:35:32.391        "min_latency_us": 19348.72380952381,
00:35:32.391        "max_latency_us": 54426.08761904762
00:35:32.391      }
00:35:32.391    ],
00:35:32.391    "core_count": 1
00:35:32.391  }
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3297401
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3297401 ']'
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3297401
00:35:32.391    00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.391    00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3297401
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3297401'
00:35:32.391  killing process with pid 3297401
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3297401
00:35:32.391  Received shutdown signal, test time was about 10.000000 seconds
00:35:32.391  
00:35:32.391                                                                                                  Latency(us)
00:35:32.391  
[2024-12-09T23:16:48.248Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:35:32.391  
[2024-12-09T23:16:48.248Z]  ===================================================================================================================
00:35:32.391  
[2024-12-09T23:16:48.248Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:35:32.391   00:16:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3297401
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:32.391  rmmod nvme_tcp
00:35:32.391  rmmod nvme_fabrics
00:35:32.391  rmmod nvme_keyring
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3297247 ']'
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3297247
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3297247 ']'
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3297247
00:35:32.391    00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname
00:35:32.391   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:32.391    00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3297247
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3297247'
00:35:32.650  killing process with pid 3297247
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3297247
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3297247
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:32.650   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:32.651   00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:32.651    00:16:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:35.183  
00:35:35.183  real	0m19.600s
00:35:35.183  user	0m22.585s
00:35:35.183  sys	0m6.315s
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:35:35.183  ************************************
00:35:35.183  END TEST nvmf_queue_depth
00:35:35.183  ************************************
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:35.183  ************************************
00:35:35.183  START TEST nvmf_target_multipath
00:35:35.183  ************************************
00:35:35.183   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode
00:35:35.183  * Looking for test storage...
00:35:35.183  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:35.183     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version
00:35:35.183     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-:
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-:
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<'
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in
00:35:35.183    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:35.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.184  		--rc genhtml_branch_coverage=1
00:35:35.184  		--rc genhtml_function_coverage=1
00:35:35.184  		--rc genhtml_legend=1
00:35:35.184  		--rc geninfo_all_blocks=1
00:35:35.184  		--rc geninfo_unexecuted_blocks=1
00:35:35.184  		
00:35:35.184  		'
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:35.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.184  		--rc genhtml_branch_coverage=1
00:35:35.184  		--rc genhtml_function_coverage=1
00:35:35.184  		--rc genhtml_legend=1
00:35:35.184  		--rc geninfo_all_blocks=1
00:35:35.184  		--rc geninfo_unexecuted_blocks=1
00:35:35.184  		
00:35:35.184  		'
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:35.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.184  		--rc genhtml_branch_coverage=1
00:35:35.184  		--rc genhtml_function_coverage=1
00:35:35.184  		--rc genhtml_legend=1
00:35:35.184  		--rc geninfo_all_blocks=1
00:35:35.184  		--rc geninfo_unexecuted_blocks=1
00:35:35.184  		
00:35:35.184  		'
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:35.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:35.184  		--rc genhtml_branch_coverage=1
00:35:35.184  		--rc genhtml_function_coverage=1
00:35:35.184  		--rc genhtml_legend=1
00:35:35.184  		--rc geninfo_all_blocks=1
00:35:35.184  		--rc geninfo_unexecuted_blocks=1
00:35:35.184  		
00:35:35.184  		'
00:35:35.184   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:35.184     00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:35.184      00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:35.184      00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:35.184      00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:35.184      00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH
00:35:35.184      00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:35.184    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:35.185    00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable
00:35:35.185   00:16:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=()
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:41.755   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:35:41.756  Found 0000:af:00.0 (0x8086 - 0x159b)
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:35:41.756  Found 0000:af:00.1 (0x8086 - 0x159b)
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:35:41.756  Found net devices under 0000:af:00.0: cvl_0_0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:35:41.756  Found net devices under 0000:af:00.1: cvl_0_1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:41.756  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:41.756  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms
00:35:41.756  
00:35:41.756  --- 10.0.0.2 ping statistics ---
00:35:41.756  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:41.756  rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:41.756  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:41.756  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms
00:35:41.756  
00:35:41.756  --- 10.0.0.1 ping statistics ---
00:35:41.756  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:41.756  rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
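The `nvmf_tcp_init` trace above builds the test network by moving the target NIC (`cvl_0_0`) into a network namespace, leaving the initiator NIC (`cvl_0_1`) in the root namespace, and assigning `10.0.0.2`/`10.0.0.1` on either side. A minimal sketch of that sequence is below; it is an illustration of the flow in the log, not the SPDK `nvmf/common.sh` implementation, and `run()` echoes the commands instead of executing them since the real ones need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above (nvmf_tcp_init).
# Interface/namespace names are taken from the log output.
run() { echo "+ $*"; }   # swap for: "$@"  to actually apply (requires root)

setup_target_ns() {
    local ns=$1 tgt_if=$2 init_if=$3
    run ip -4 addr flush "$tgt_if"
    run ip -4 addr flush "$init_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"           # target NIC moves into the namespace
    run ip addr add 10.0.0.1/24 dev "$init_if"      # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$init_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up       # loopback inside the ns too
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, the `ping -c 1 10.0.0.2` / `ip netns exec … ping -c 1 10.0.0.1` pair in the log verifies connectivity in both directions before the target app is launched inside the namespace.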
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:35:41.756  only one NIC for nvmf test
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:41.756  rmmod nvme_tcp
00:35:41.756  rmmod nvme_fabrics
00:35:41.756  rmmod nvme_keyring
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:41.756   00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:41.756    00:16:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
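The `iptr` step in the teardown above relies on every SPDK-added firewall rule carrying an `-m comment --comment SPDK_NVMF…` tag (added by the `ipts` wrapper earlier in the log), so cleanup can round-trip the whole ruleset through `grep -v` and restore it without the test rules. A small sketch of that idiom over a captured ruleset string (the real invocation pipes `iptables-save` into `iptables-restore` and needs root; the second rule here is a made-up example):

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF tag-and-sweep cleanup idiom:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# demonstrated on an in-memory ruleset instead of the live firewall.
rules='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Only the untagged rules survive the filter; feeding the result to
# iptables-restore would drop exactly the SPDK-tagged entries.
printf '%s\n' "$rules" | grep -v SPDK_NVMF
```

Tagging rules with a comment at insert time is what makes the teardown idempotent: it removes its own rules and nothing else, no matter how many times setup ran.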
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:43.133    00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:43.133  
00:35:43.133  real	0m8.362s
00:35:43.133  user	0m1.806s
00:35:43.133  sys	0m4.510s
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:43.133   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:35:43.133  ************************************
00:35:43.133  END TEST nvmf_target_multipath
00:35:43.134  ************************************
00:35:43.134   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:35:43.134   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:35:43.134   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:35:43.134   00:16:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:43.393  ************************************
00:35:43.393  START TEST nvmf_zcopy
00:35:43.393  ************************************
00:35:43.393   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode
00:35:43.393  * Looking for test storage...
00:35:43.393  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:43.393    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:35:43.393     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:35:43.393     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:35:43.393    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
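The `lt 1.15 2` trace above shows `cmp_versions` splitting each version on `.`, `-`, and `:` into arrays and comparing component by component, padding the shorter version with zeros. A simplified self-contained sketch of that algorithm (not the exact `scripts/common.sh` implementation, which also handles `>`, `>=`, and `<=`):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, mirroring the cmp_versions flow
# traced above. Supports the "<", ">", and "==" operators.
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v c1 c2
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)

    # Walk the longer of the two lists; missing components count as 0.
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        c1=${ver1[v]:-0} c2=${ver2[v]:-0}
        if ((c1 > c2)); then
            [[ $op == ">" ]]; return
        elif ((c1 < c2)); then
            [[ $op == "<" ]]; return
        fi
    done
    [[ $op == "==" ]]   # all components equal
}

cmp_versions 1.15 "<" 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

In the log this decides that the installed `lcov` is new enough (`lt 1.15 2` succeeds), which selects the branch/function-coverage `LCOV_OPTS` that follow.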
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:35:43.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.394  		--rc genhtml_branch_coverage=1
00:35:43.394  		--rc genhtml_function_coverage=1
00:35:43.394  		--rc genhtml_legend=1
00:35:43.394  		--rc geninfo_all_blocks=1
00:35:43.394  		--rc geninfo_unexecuted_blocks=1
00:35:43.394  		
00:35:43.394  		'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:35:43.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.394  		--rc genhtml_branch_coverage=1
00:35:43.394  		--rc genhtml_function_coverage=1
00:35:43.394  		--rc genhtml_legend=1
00:35:43.394  		--rc geninfo_all_blocks=1
00:35:43.394  		--rc geninfo_unexecuted_blocks=1
00:35:43.394  		
00:35:43.394  		'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:35:43.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.394  		--rc genhtml_branch_coverage=1
00:35:43.394  		--rc genhtml_function_coverage=1
00:35:43.394  		--rc genhtml_legend=1
00:35:43.394  		--rc geninfo_all_blocks=1
00:35:43.394  		--rc geninfo_unexecuted_blocks=1
00:35:43.394  		
00:35:43.394  		'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:35:43.394  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:43.394  		--rc genhtml_branch_coverage=1
00:35:43.394  		--rc genhtml_function_coverage=1
00:35:43.394  		--rc genhtml_legend=1
00:35:43.394  		--rc geninfo_all_blocks=1
00:35:43.394  		--rc geninfo_unexecuted_blocks=1
00:35:43.394  		
00:35:43.394  		'
00:35:43.394   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:43.394    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:43.394     00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:43.394      00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.394      00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.395      00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:43.395      00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:35:43.395      00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
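The `PATH` lines above show the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories prepended many times, because `paths/export.sh` is sourced once per nested test script and prepends unconditionally. This is harmless (lookup stops at the first hit) but noisy; a common guard, sketched below with a hypothetical `path_prepend` helper that is not part of the SPDK tree, prepends only when the directory is absent:

```shell
#!/usr/bin/env bash
# Idempotent PATH prepend: a no-op when the directory is already present.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
    esac
}

OLDPATH=$PATH                     # save so the demo doesn't disturb the shell
PATH="/usr/bin"
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # prints: /opt/go/1.21.1/bin:/usr/bin
RESULT=$PATH
PATH=$OLDPATH
```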
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:43.395    00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:35:43.395   00:16:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:35:49.961   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:35:49.962  Found 0000:af:00.0 (0x8086 - 0x159b)
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:35:49.962  Found 0000:af:00.1 (0x8086 - 0x159b)
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:35:49.962  Found net devices under 0000:af:00.0: cvl_0_0
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:35:49.962  Found net devices under 0000:af:00.1: cvl_0_1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:35:49.962   00:17:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:35:49.962   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:35:49.962   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:35:49.962   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:35:49.962  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:35:49.962  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms
00:35:49.962  
00:35:49.962  --- 10.0.0.2 ping statistics ---
00:35:49.962  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:49.962  rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms
00:35:49.962   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:35:49.962  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:35:49.962  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:35:49.962  
00:35:49.962  --- 10.0.0.1 ping statistics ---
00:35:49.962  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:49.962  rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
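The `nvmf_tcp_init` sequence traced above moves one physical port into a private network namespace so the target and initiator can talk over real NICs on one host. A dry-run sketch of that sequence, using the interface names and addresses from this log (each command is echoed rather than executed, since the real steps need root and the `cvl_0_*` devices):

```shell
# Dry-run sketch of the namespace setup nvmf_tcp_init performs above.
# Interface names (cvl_0_0/cvl_0_1) and the 10.0.0.x addresses are taken
# from this log; "run" echoes each command instead of executing it, so no
# root privileges or real NICs are required. Swap run() for sudo to apply.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                        # namespace for the target side
run ip link set cvl_0_0 netns "$NS"           # move the target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator port stays in root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                        # cross-namespace reachability check
```

The pings in the log (root namespace → 10.0.0.2, then inside the namespace → 10.0.0.1) confirm both directions before the target app is launched under `ip netns exec`.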
00:35:49.962   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3305976
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3305976
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3305976 ']'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:49.963  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963  [2024-12-10 00:17:05.119426] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:35:49.963  [2024-12-10 00:17:05.120396] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:49.963  [2024-12-10 00:17:05.120437] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:35:49.963  [2024-12-10 00:17:05.201068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:49.963  [2024-12-10 00:17:05.241040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:35:49.963  [2024-12-10 00:17:05.241075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:35:49.963  [2024-12-10 00:17:05.241081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:35:49.963  [2024-12-10 00:17:05.241087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:35:49.963  [2024-12-10 00:17:05.241092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:35:49.963  [2024-12-10 00:17:05.241563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:49.963  [2024-12-10 00:17:05.308799] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:35:49.963  [2024-12-10 00:17:05.308990] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963  [2024-12-10 00:17:05.374249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963  [2024-12-10 00:17:05.402450] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963  malloc0
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:49.963   00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:35:49.963  {
00:35:49.963    "params": {
00:35:49.963      "name": "Nvme$subsystem",
00:35:49.963      "trtype": "$TEST_TRANSPORT",
00:35:49.963      "traddr": "$NVMF_FIRST_TARGET_IP",
00:35:49.963      "adrfam": "ipv4",
00:35:49.963      "trsvcid": "$NVMF_PORT",
00:35:49.963      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:35:49.963      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:35:49.963      "hdgst": ${hdgst:-false},
00:35:49.963      "ddgst": ${ddgst:-false}
00:35:49.963    },
00:35:49.963    "method": "bdev_nvme_attach_controller"
00:35:49.963  }
00:35:49.963  EOF
00:35:49.963  )")
00:35:49.963     00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:35:49.963    00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:35:49.963     00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:35:49.964     00:17:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:35:49.964    "params": {
00:35:49.964      "name": "Nvme1",
00:35:49.964      "trtype": "tcp",
00:35:49.964      "traddr": "10.0.0.2",
00:35:49.964      "adrfam": "ipv4",
00:35:49.964      "trsvcid": "4420",
00:35:49.964      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:35:49.964      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:35:49.964      "hdgst": false,
00:35:49.964      "ddgst": false
00:35:49.964    },
00:35:49.964    "method": "bdev_nvme_attach_controller"
00:35:49.964  }'
00:35:49.964  [2024-12-10 00:17:05.498299] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:35:49.964  [2024-12-10 00:17:05.498352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3306005 ]
00:35:49.964  [2024-12-10 00:17:05.573362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:49.964  [2024-12-10 00:17:05.615845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:49.964  Running I/O for 10 seconds...
00:35:52.276       8558.00 IOPS,    66.86 MiB/s
[2024-12-09T23:17:09.118Z]      8644.50 IOPS,    67.54 MiB/s
[2024-12-09T23:17:10.129Z]      8654.67 IOPS,    67.61 MiB/s
[2024-12-09T23:17:11.064Z]      8667.75 IOPS,    67.72 MiB/s
[2024-12-09T23:17:12.001Z]      8656.80 IOPS,    67.63 MiB/s
[2024-12-09T23:17:12.936Z]      8663.00 IOPS,    67.68 MiB/s
[2024-12-09T23:17:13.872Z]      8667.57 IOPS,    67.72 MiB/s
[2024-12-09T23:17:15.255Z]      8681.88 IOPS,    67.83 MiB/s
[2024-12-09T23:17:16.193Z]      8687.56 IOPS,    67.87 MiB/s
[2024-12-09T23:17:16.193Z]      8692.60 IOPS,    67.91 MiB/s
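The MiB/s column in the per-second samples above is just IOPS scaled by the IO size (8192 bytes, from the `-o 8192` flag on the bdevperf invocation earlier). A quick arithmetic check against the first sample:

```shell
# Sanity-check the IOPS -> MiB/s conversion bdevperf prints above:
# throughput = IOPS * io_size_bytes / 2^20.
# io_size of 8192 bytes comes from the "-o 8192" bdevperf argument.
awk 'BEGIN { iops = 8558.00; io_size = 8192
             printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'
# -> 66.86 MiB/s, matching the first sample line
```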
00:36:00.336                                                                                                  Latency(us)
00:36:00.336  
[2024-12-09T23:17:16.193Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:00.336  Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:36:00.336  	 Verification LBA range: start 0x0 length 0x1000
00:36:00.336  	 Nvme1n1             :      10.01    8693.85      67.92       0.00     0.00   14680.53    1287.31   21096.35
00:36:00.336  
[2024-12-09T23:17:16.193Z]  ===================================================================================================================
00:36:00.336  
[2024-12-09T23:17:16.193Z]  Total                       :               8693.85      67.92       0.00     0.00   14680.53    1287.31   21096.35
00:36:00.336   00:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3307780
00:36:00.336   00:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:36:00.336   00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:00.336   00:17:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:36:00.336  {
00:36:00.336    "params": {
00:36:00.336      "name": "Nvme$subsystem",
00:36:00.336      "trtype": "$TEST_TRANSPORT",
00:36:00.336      "traddr": "$NVMF_FIRST_TARGET_IP",
00:36:00.336      "adrfam": "ipv4",
00:36:00.336      "trsvcid": "$NVMF_PORT",
00:36:00.336      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:36:00.336      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:36:00.336      "hdgst": ${hdgst:-false},
00:36:00.336      "ddgst": ${ddgst:-false}
00:36:00.336    },
00:36:00.336    "method": "bdev_nvme_attach_controller"
00:36:00.336  }
00:36:00.336  EOF
00:36:00.336  )")
00:36:00.336  [2024-12-10 00:17:16.005900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.336  [2024-12-10 00:17:16.005935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.336     00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:36:00.336    00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:36:00.336     00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:36:00.336     00:17:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:00.336    "params": {
00:36:00.336      "name": "Nvme1",
00:36:00.336      "trtype": "tcp",
00:36:00.336      "traddr": "10.0.0.2",
00:36:00.336      "adrfam": "ipv4",
00:36:00.336      "trsvcid": "4420",
00:36:00.336      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:00.336      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:00.336      "hdgst": false,
00:36:00.336      "ddgst": false
00:36:00.336    },
00:36:00.336    "method": "bdev_nvme_attach_controller"
00:36:00.336  }'
00:36:00.336  [2024-12-10 00:17:16.017868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.336  [2024-12-10 00:17:16.017881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.336  [2024-12-10 00:17:16.029865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.336  [2024-12-10 00:17:16.029876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.336  [2024-12-10 00:17:16.041863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.336  [2024-12-10 00:17:16.041874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.336  [2024-12-10 00:17:16.048627] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:36:00.336  [2024-12-10 00:17:16.048670] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3307780 ]
00:36:00.337  [2024-12-10 00:17:16.053866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.053876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.065863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.065873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.077867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.077879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.089863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.089873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.101863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.101873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.113863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.113873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.123843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:36:00.337  [2024-12-10 00:17:16.125862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.125875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.137866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.137879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.149865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.149875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.161865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.161878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.163475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:00.337  [2024-12-10 00:17:16.173878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.173894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.337  [2024-12-10 00:17:16.185874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.337  [2024-12-10 00:17:16.185895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.197869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.197883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.209866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.209880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.221867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.221882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.233866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.233878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.245875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.245892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.257877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.257895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.269875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.269893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.281871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.281885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.293863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.293873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.305863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.305873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.317866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.317880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.329871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.329886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.341871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.341887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.353868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.353884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  Running I/O for 5 seconds...
00:36:00.596  [2024-12-10 00:17:16.367234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.367253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.378666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.378687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.391716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.391737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.406571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.406590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.416929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.416949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.431687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.431706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.596  [2024-12-10 00:17:16.446319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.596  [2024-12-10 00:17:16.446337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.461196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.461216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.475756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.475776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.490156] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.490181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.503624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.503643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.518406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.518426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.530821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.530841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.542085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.542104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.555949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.555974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.570315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.570333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.582442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.582471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.595547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.595566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.605553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.605573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.619464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.619484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.634550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.634569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.649566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.649586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.663643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.663662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.677875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.677894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.690621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.690639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:00.855  [2024-12-10 00:17:16.703898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:00.855  [2024-12-10 00:17:16.703917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.718446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.718465] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.732909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.732928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.747486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.747506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.761728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.761747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.776038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.776058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.790847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.790867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.806107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.806126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.819331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.819350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.830215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.830234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.845171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.845190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.859825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.114  [2024-12-10 00:17:16.859849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.114  [2024-12-10 00:17:16.874045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.874064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.887543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.887562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.901830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.901851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.912406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.912427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.926838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.926858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.942206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.942226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.954803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.954822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.115  [2024-12-10 00:17:16.969851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.115  [2024-12-10 00:17:16.969870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:16.983369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:16.983389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:16.994212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:16.994232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.007903] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.007923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.022515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.022534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.033703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.033722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.047828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.047848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.062793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.062813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.078255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.078275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.091691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.091711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.106718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.106737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.121715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.121741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.135421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.135440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.150081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.150100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.163391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.163411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.178145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.178165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.193875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.193895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.207618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.207638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.374  [2024-12-10 00:17:17.221813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.374  [2024-12-10 00:17:17.221832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.235852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.235872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.250726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.250746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.266002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.266023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.279787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.279808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.294109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.294129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.305829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.305850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.319876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.319898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.334380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.334399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.346738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.346756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.359100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.359119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633      16932.00 IOPS,   132.28 MiB/s
00:36:01.633  [2024-12-10 00:17:17.370129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.633  [2024-12-10 00:17:17.370148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.633  [2024-12-10 00:17:17.383213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.383237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.398099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.398120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.409874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.409894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.423601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.423620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.438195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.438213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.453482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.453501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.467751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.467771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.634  [2024-12-10 00:17:17.482553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.634  [2024-12-10 00:17:17.482571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.497594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.497613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.511769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.511788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.526022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.526041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.537128] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.537147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.551756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.551775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.566163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.566188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.581936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.581956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.595340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.595360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.609722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.609740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.621960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.621980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.635769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.635788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.650098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.650117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.660853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.660872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.675560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.675578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.690026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.690045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.703945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.703964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.718578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.718596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.733357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.733376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:01.893  [2024-12-10 00:17:17.747000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:01.893  [2024-12-10 00:17:17.747019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.762176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.762194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.777736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.777755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.791396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.791415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.802208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.802226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.818093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.818112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.829381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.829399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.843885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.843903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.858468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.858487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.874227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.152  [2024-12-10 00:17:17.874245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.152  [2024-12-10 00:17:17.889644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.889663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.900673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.900693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.915305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.915324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.929856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.929875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.940752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.940771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.954983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.955002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.969492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.969511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.983239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.983259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.153  [2024-12-10 00:17:17.997452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.153  [2024-12-10 00:17:17.997471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.414  [2024-12-10 00:17:18.011281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.011300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.025422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.025442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.039847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.039866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.054186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.054205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.069276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.069296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.083390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.083410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.098202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.098220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.113062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.113081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.127607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.127626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.142583] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.142602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.157852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.157871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.169184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.169202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.183764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.183784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.198153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.198177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.210987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.211007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.223380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.223399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.238207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.238225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.253808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.253827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.415  [2024-12-10 00:17:18.265071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.415  [2024-12-10 00:17:18.265090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.279573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.279592] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.294245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.294264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.309222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.309242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.324052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.324073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.339200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.339221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.353866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.353885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676      16952.50 IOPS,   132.44 MiB/s
00:36:02.676  [2024-12-10 00:17:18.366108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.366128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.379352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.379372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.393935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.393956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.404471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.404490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.418702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.418721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.433972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.433998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.446238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.446257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.459858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.459877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.474697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.474716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.489462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.489481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.502968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.502988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.517922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.517942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.676  [2024-12-10 00:17:18.530916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.676  [2024-12-10 00:17:18.530936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.545394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.545415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.558769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.558788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.573735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.573754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.588161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.588187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.602902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.602921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.617961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.617979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.631043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.631062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.643533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.643553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.658537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.658556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.673483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.673503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.687586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.687605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.702358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.702389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.713126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.713145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.727430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.727450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.741891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.741910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.754616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.754635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.767086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.767105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:02.935  [2024-12-10 00:17:18.782161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:02.935  [2024-12-10 00:17:18.782185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.194  [2024-12-10 00:17:18.798303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.798323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.810578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.810597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.823774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.823793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.838896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.838915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.854117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.854137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.865461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.865481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.879955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.879974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.894328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.894347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.905258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.905276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.920103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.920121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.934195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.934213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.949897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.949916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.963170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.963195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.974254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.974272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:18.987245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:18.987263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:19.001647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:19.001666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:19.015824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:19.015844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:19.030218] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:19.030237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.195  [2024-12-10 00:17:19.045258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.195  [2024-12-10 00:17:19.045277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.059845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.059864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.074335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.074353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.089640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.089660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.103134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.103153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.117623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.117643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.131718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.131737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.146145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.146163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.160898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.160917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.175515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.175535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.190410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.190429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.206069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.206089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.218310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.218329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.231536] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.231561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.246517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.246537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.261669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.261689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.274426] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.274445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.289581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.289601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.454  [2024-12-10 00:17:19.302245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.454  [2024-12-10 00:17:19.302263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.315534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.315553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.330211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.330229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.345838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.345858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.356905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.356923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713      16977.00 IOPS,   132.63 MiB/s
00:36:03.713  [2024-12-10 00:17:19.371347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.371366] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.385733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.385752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.398926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.398944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.410465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.410484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.425491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.425509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.439043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.439062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.453618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.453638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.466491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.466512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.479265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.479285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.493971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.493990] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.504917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.504936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.519459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.519483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.534373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.534400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.550242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.550260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.713  [2024-12-10 00:17:19.563632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.713  [2024-12-10 00:17:19.563651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.578195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.578213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.593301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.593323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.608090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.608109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.622789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.622808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.637920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.637939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.651354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.651373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.662020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.662038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.675866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.675885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.690484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.690502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.705324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.705343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.718915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.718938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.972  [2024-12-10 00:17:19.733629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.972  [2024-12-10 00:17:19.733649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.747011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.747030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.762244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.762264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.777853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.777873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.789415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.789434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.803313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.803332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.817901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.817920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:03.973  [2024-12-10 00:17:19.829026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:03.973  [2024-12-10 00:17:19.829045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.843587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.843606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.858414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.858437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.873194] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.873215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.887957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.887977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.902415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.902433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.917826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.917845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.931485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.931505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.946703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.946723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.961741] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.961761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.975574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.975594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:19.990557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:19.990576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.006192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.006212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.022019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.022044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.034684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.034707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.049513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.049533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.062954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.062974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.231  [2024-12-10 00:17:20.077921] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.231  [2024-12-10 00:17:20.077944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.089108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.089127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.104022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.104042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.119192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.119212] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.133898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.133918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.147796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.147815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.162001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.162020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.174744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.174763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.189140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.189159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.203331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.203350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.217947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.217968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.229119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.229137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.243662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.243681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.258331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.258350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.273843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.273862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.287877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.287902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.302916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.302935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.317190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.317210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.331701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.331720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.490  [2024-12-10 00:17:20.346069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.490  [2024-12-10 00:17:20.346089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.359653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.359672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750      16962.75 IOPS,   132.52 MiB/s
00:36:04.750  [2024-12-10 00:17:20.374773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.374792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.389203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.389223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.402928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.402946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.418400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.418419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.433844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.433864] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.446673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.446692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.459230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.459249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.469656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.469675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.483616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.483636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.498325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.498346] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.513883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.513902] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.527573] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.527593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.542298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.542316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.557944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.557969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.570580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.570598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.586736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.586756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:04.750  [2024-12-10 00:17:20.601739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:04.750  [2024-12-10 00:17:20.601758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.615600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.615620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.630929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.630948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.645814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.645833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.658469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.658487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.674147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.674176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.690155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.690180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.705663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.705682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.718631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.718650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.731395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.731414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.741646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.741665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.755917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.755936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.770679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.770697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.785787] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.785808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.799617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.799636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.813928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.813947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.825924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.825946] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.839475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.839494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.009  [2024-12-10 00:17:20.854306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.009  [2024-12-10 00:17:20.854324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.869923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.869943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.883818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.883838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.898369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.898388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.914028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.914046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.927098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.927117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.942515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.268  [2024-12-10 00:17:20.942533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.268  [2024-12-10 00:17:20.957403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:20.957433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:20.971452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:20.971471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:20.986112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:20.986131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:20.996643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:20.996662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.011401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.011420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.026121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.026140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.037402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.037432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.051743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.051762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.066247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.066265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.081543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.081562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.095851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.095871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.269  [2024-12-10 00:17:21.110659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.269  [2024-12-10 00:17:21.110679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.126577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.126596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.141464] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.141484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.155721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.155740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.170082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.170102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.182245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.182265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.195735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.195754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.210398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.210417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.225513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.225534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.239472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.239491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.254029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.254049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.264864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.264883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.279904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.279923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.294530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.294550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.310179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.310198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.326375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.326398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.341833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.341853] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.354801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.354821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  [2024-12-10 00:17:21.369900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.369921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528      16921.00 IOPS,   132.20 MiB/s
00:36:05.528  [2024-12-10 00:17:21.377874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.528  [2024-12-10 00:17:21.377893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.528  
00:36:05.528                                                                                                  Latency(us)
00:36:05.528  
00:36:05.528   Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:36:05.528  Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:36:05.528  	 Nvme1n1             :       5.01   16923.15     132.21       0.00     0.00    7556.60    2090.91   13356.86
00:36:05.528  
00:36:05.528   ===================================================================================================================
00:36:05.528  
00:36:05.528   Total                       :              16923.15     132.21       0.00     0.00    7556.60    2090.91   13356.86
00:36:05.787  [2024-12-10 00:17:21.389873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.389891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.401875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.401890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.413880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.413901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.425871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.425884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.437874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.437888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.449869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.449883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.461872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.461889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.473868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.473883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.485870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.485885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.497865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.497875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.509870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.509882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.521867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.521880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  [2024-12-10 00:17:21.533866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:36:05.787  [2024-12-10 00:17:21.533879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:05.787  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3307780) - No such process
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3307780
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:05.787  delay0
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:36:05.787   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:05.788   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:05.788   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:05.788   00:17:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:36:06.046  [2024-12-10 00:17:21.687043] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:36:12.612  Initializing NVMe Controllers
00:36:12.612  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:12.612  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:36:12.612  Initialization complete. Launching workers.
00:36:12.612  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 268, failed: 16838
00:36:12.612  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 17006, failed to submit 100
00:36:12.612  	 success 16907, unsuccessful 99, failed 0
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:12.612  rmmod nvme_tcp
00:36:12.612  rmmod nvme_fabrics
00:36:12.612  rmmod nvme_keyring
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3305976 ']'
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3305976
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3305976 ']'
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3305976
00:36:12.612    00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:12.612    00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3305976
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:12.612   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:12.613   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3305976'
00:36:12.613  killing process with pid 3305976
00:36:12.613   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3305976
00:36:12.613   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3305976
00:36:12.872   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:12.872   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:12.872   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:12.872   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:36:12.872   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:12.873   00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:12.873    00:17:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:14.778  
00:36:14.778  real	0m31.554s
00:36:14.778  user	0m40.634s
00:36:14.778  sys	0m12.661s
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:36:14.778  ************************************
00:36:14.778  END TEST nvmf_zcopy
00:36:14.778  ************************************
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:14.778  ************************************
00:36:14.778  START TEST nvmf_nmic
00:36:14.778  ************************************
00:36:14.778   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode
00:36:15.041  * Looking for test storage...
00:36:15.041  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-:
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-:
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:15.041  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.041  		--rc genhtml_branch_coverage=1
00:36:15.041  		--rc genhtml_function_coverage=1
00:36:15.041  		--rc genhtml_legend=1
00:36:15.041  		--rc geninfo_all_blocks=1
00:36:15.041  		--rc geninfo_unexecuted_blocks=1
00:36:15.041  		
00:36:15.041  		'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:15.041  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.041  		--rc genhtml_branch_coverage=1
00:36:15.041  		--rc genhtml_function_coverage=1
00:36:15.041  		--rc genhtml_legend=1
00:36:15.041  		--rc geninfo_all_blocks=1
00:36:15.041  		--rc geninfo_unexecuted_blocks=1
00:36:15.041  		
00:36:15.041  		'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:15.041  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.041  		--rc genhtml_branch_coverage=1
00:36:15.041  		--rc genhtml_function_coverage=1
00:36:15.041  		--rc genhtml_legend=1
00:36:15.041  		--rc geninfo_all_blocks=1
00:36:15.041  		--rc geninfo_unexecuted_blocks=1
00:36:15.041  		
00:36:15.041  		'
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:15.041  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:15.041  		--rc genhtml_branch_coverage=1
00:36:15.041  		--rc genhtml_function_coverage=1
00:36:15.041  		--rc genhtml_legend=1
00:36:15.041  		--rc geninfo_all_blocks=1
00:36:15.041  		--rc geninfo_unexecuted_blocks=1
00:36:15.041  		
00:36:15.041  		'
00:36:15.041   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:15.041     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:36:15.041    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:15.042     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob
00:36:15.042     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:15.042     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:15.042     00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:15.042      00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.042      00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.042      00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.042      00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH
00:36:15.042      00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:15.042    00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable
00:36:15.042   00:17:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=()
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:36:21.613  Found 0000:af:00.0 (0x8086 - 0x159b)
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:36:21.613  Found 0000:af:00.1 (0x8086 - 0x159b)
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:36:21.613  Found net devices under 0000:af:00.0: cvl_0_0
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:21.613   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:36:21.614  Found net devices under 0000:af:00.1: cvl_0_1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:21.614  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:21.614  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms
00:36:21.614  
00:36:21.614  --- 10.0.0.2 ping statistics ---
00:36:21.614  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:21.614  rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:21.614  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:21.614  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms
00:36:21.614  
00:36:21.614  --- 10.0.0.1 ping statistics ---
00:36:21.614  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:21.614  rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3313047
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3313047
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3313047 ']'
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:21.614  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:21.614   00:17:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.614  [2024-12-10 00:17:36.863484] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:21.614  [2024-12-10 00:17:36.864454] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:36:21.614  [2024-12-10 00:17:36.864496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:21.614  [2024-12-10 00:17:36.945488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:21.614  [2024-12-10 00:17:36.986609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:21.614  [2024-12-10 00:17:36.986649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:21.614  [2024-12-10 00:17:36.986656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:21.614  [2024-12-10 00:17:36.986662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:21.614  [2024-12-10 00:17:36.986667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:21.614  [2024-12-10 00:17:36.987987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:21.614  [2024-12-10 00:17:36.988095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:21.614  [2024-12-10 00:17:36.988207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:21.614  [2024-12-10 00:17:36.988207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:36:21.614  [2024-12-10 00:17:37.056795] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:21.614  [2024-12-10 00:17:37.057573] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:21.614  [2024-12-10 00:17:37.057845] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:36:21.614  [2024-12-10 00:17:37.058383] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:21.614  [2024-12-10 00:17:37.058407] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:21.614   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615  [2024-12-10 00:17:37.132920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615  Malloc0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615  [2024-12-10 00:17:37.213246] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
00:36:21.615  test case1: single bdev can't be used in multiple subsystems
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615  [2024-12-10 00:17:37.244633] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:36:21.615  [2024-12-10 00:17:37.244657] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:36:21.615  [2024-12-10 00:17:37.244665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:36:21.615  request:
00:36:21.615  {
00:36:21.615  "nqn": "nqn.2016-06.io.spdk:cnode2",
00:36:21.615  "namespace": {
00:36:21.615  "bdev_name": "Malloc0",
00:36:21.615  "no_auto_visible": false,
00:36:21.615  "hide_metadata": false
00:36:21.615  },
00:36:21.615  "method": "nvmf_subsystem_add_ns",
00:36:21.615  "req_id": 1
00:36:21.615  }
00:36:21.615  Got JSON-RPC error response
00:36:21.615  response:
00:36:21.615  {
00:36:21.615  "code": -32602,
00:36:21.615  "message": "Invalid parameters"
00:36:21.615  }
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:36:21.615   Adding namespace failed - expected result.
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:36:21.615  test case2: host connect to nvmf target in multiple paths
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:21.615  [2024-12-10 00:17:37.256730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:21.615   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:36:21.874   00:17:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2
00:36:24.417   00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:36:24.417    00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:36:24.417    00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:36:24.417   00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:36:24.417   00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:36:24.417   00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0
00:36:24.417   00:17:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:36:24.417  [global]
00:36:24.417  thread=1
00:36:24.417  invalidate=1
00:36:24.417  rw=write
00:36:24.417  time_based=1
00:36:24.417  runtime=1
00:36:24.417  ioengine=libaio
00:36:24.417  direct=1
00:36:24.417  bs=4096
00:36:24.417  iodepth=1
00:36:24.417  norandommap=0
00:36:24.417  numjobs=1
00:36:24.417  
00:36:24.417  verify_dump=1
00:36:24.417  verify_backlog=512
00:36:24.417  verify_state_save=0
00:36:24.417  do_verify=1
00:36:24.417  verify=crc32c-intel
00:36:24.417  [job0]
00:36:24.417  filename=/dev/nvme0n1
00:36:24.417  Could not set queue depth (nvme0n1)
00:36:24.417  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:24.417  fio-3.35
00:36:24.417  Starting 1 thread
00:36:25.353  
00:36:25.353  job0: (groupid=0, jobs=1): err= 0: pid=3313813: Tue Dec 10 00:17:41 2024
00:36:25.353    read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec)
00:36:25.353      slat (nsec): min=6669, max=41601, avg=7683.67, stdev=1486.05
00:36:25.353      clat (usec): min=190, max=1456, avg=212.03, stdev=30.62
00:36:25.353       lat (usec): min=197, max=1463, avg=219.71, stdev=30.65
00:36:25.353      clat percentiles (usec):
00:36:25.353       |  1.00th=[  194],  5.00th=[  200], 10.00th=[  204], 20.00th=[  206],
00:36:25.353       | 30.00th=[  208], 40.00th=[  208], 50.00th=[  210], 60.00th=[  212],
00:36:25.353       | 70.00th=[  212], 80.00th=[  215], 90.00th=[  219], 95.00th=[  225],
00:36:25.353       | 99.00th=[  255], 99.50th=[  262], 99.90th=[  553], 99.95th=[  865],
00:36:25.353       | 99.99th=[ 1450]
00:36:25.353    write: IOPS=2610, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec); 0 zone resets
00:36:25.353      slat (nsec): min=9620, max=46566, avg=10980.17, stdev=1751.45
00:36:25.353      clat (usec): min=127, max=281, avg=150.56, stdev=29.44
00:36:25.353       lat (usec): min=139, max=327, avg=161.54, stdev=29.61
00:36:25.353      clat percentiles (usec):
00:36:25.353       |  1.00th=[  131],  5.00th=[  133], 10.00th=[  135], 20.00th=[  135],
00:36:25.353       | 30.00th=[  137], 40.00th=[  137], 50.00th=[  139], 60.00th=[  139],
00:36:25.353       | 70.00th=[  143], 80.00th=[  178], 90.00th=[  186], 95.00th=[  239],
00:36:25.353       | 99.00th=[  247], 99.50th=[  249], 99.90th=[  255], 99.95th=[  258],
00:36:25.353       | 99.99th=[  281]
00:36:25.353     bw (  KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1
00:36:25.353     iops        : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1
00:36:25.353    lat (usec)   : 250=99.11%, 500=0.83%, 750=0.02%, 1000=0.02%
00:36:25.353    lat (msec)   : 2=0.02%
00:36:25.353    cpu          : usr=5.10%, sys=7.00%, ctx=5173, majf=0, minf=1
00:36:25.353    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:25.353       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.353       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:25.353       issued rwts: total=2560,2613,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:25.353       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:25.353  
00:36:25.353  Run status group 0 (all jobs):
00:36:25.353     READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec
00:36:25.353    WRITE: bw=10.2MiB/s (10.7MB/s), 10.2MiB/s-10.2MiB/s (10.7MB/s-10.7MB/s), io=10.2MiB (10.7MB), run=1001-1001msec
00:36:25.353  
00:36:25.353  Disk stats (read/write):
00:36:25.353    nvme0n1: ios=2186/2560, merge=0/0, ticks=446/358, in_queue=804, util=91.28%
00:36:25.353   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:36:25.612  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:25.612  rmmod nvme_tcp
00:36:25.612  rmmod nvme_fabrics
00:36:25.612  rmmod nvme_keyring
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3313047 ']'
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3313047
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3313047 ']'
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3313047
00:36:25.612    00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname
00:36:25.612   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:25.612    00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3313047
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3313047'
00:36:25.872  killing process with pid 3313047
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3313047
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3313047
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:25.872   00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:25.872    00:17:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:28.409  
00:36:28.409  real	0m13.129s
00:36:28.409  user	0m23.314s
00:36:28.409  sys	0m6.233s
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:36:28.409  ************************************
00:36:28.409  END TEST nvmf_nmic
00:36:28.409  ************************************
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:28.409  ************************************
00:36:28.409  START TEST nvmf_fio_target
00:36:28.409  ************************************
00:36:28.409   00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode
00:36:28.409  * Looking for test storage...
00:36:28.409  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:28.409     00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version
00:36:28.409     00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-:
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-:
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<'
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:28.409    00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:28.409     00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1
00:36:28.409     00:17:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:28.409     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:28.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:28.409  		--rc genhtml_branch_coverage=1
00:36:28.409  		--rc genhtml_function_coverage=1
00:36:28.409  		--rc genhtml_legend=1
00:36:28.409  		--rc geninfo_all_blocks=1
00:36:28.409  		--rc geninfo_unexecuted_blocks=1
00:36:28.409  		
00:36:28.409  		'
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:28.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:28.409  		--rc genhtml_branch_coverage=1
00:36:28.409  		--rc genhtml_function_coverage=1
00:36:28.409  		--rc genhtml_legend=1
00:36:28.409  		--rc geninfo_all_blocks=1
00:36:28.409  		--rc geninfo_unexecuted_blocks=1
00:36:28.409  		
00:36:28.409  		'
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:28.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:28.409  		--rc genhtml_branch_coverage=1
00:36:28.409  		--rc genhtml_function_coverage=1
00:36:28.409  		--rc genhtml_legend=1
00:36:28.409  		--rc geninfo_all_blocks=1
00:36:28.409  		--rc geninfo_unexecuted_blocks=1
00:36:28.409  		
00:36:28.409  		'
00:36:28.409    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:28.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:28.409  		--rc genhtml_branch_coverage=1
00:36:28.409  		--rc genhtml_function_coverage=1
00:36:28.410  		--rc genhtml_legend=1
00:36:28.410  		--rc geninfo_all_blocks=1
00:36:28.410  		--rc geninfo_unexecuted_blocks=1
00:36:28.410  		
00:36:28.410  		'
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:28.410     00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:28.410      00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:28.410      00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:28.410      00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:28.410      00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH
00:36:28.410      00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:28.410    00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable
00:36:28.410   00:17:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=()
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:36:34.982  Found 0000:af:00.0 (0x8086 - 0x159b)
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:36:34.982  Found 0000:af:00.1 (0x8086 - 0x159b)
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:36:34.982  Found net devices under 0000:af:00.0: cvl_0_0
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:36:34.982  Found net devices under 0000:af:00.1: cvl_0_1
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:36:34.982   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:36:34.983  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:36:34.983  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms
00:36:34.983  
00:36:34.983  --- 10.0.0.2 ping statistics ---
00:36:34.983  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:34.983  rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:36:34.983  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:36:34.983  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:36:34.983  
00:36:34.983  --- 10.0.0.1 ping statistics ---
00:36:34.983  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:36:34.983  rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3317395
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3317395
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3317395 ']'
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:36:34.983  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable
00:36:34.983   00:17:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:34.983  [2024-12-10 00:17:49.922925] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:36:34.983  [2024-12-10 00:17:49.923881] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:36:34.983  [2024-12-10 00:17:49.923913] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:34.983  [2024-12-10 00:17:50.010428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:34.983  [2024-12-10 00:17:50.061607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:34.983  [2024-12-10 00:17:50.061645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:34.983  [2024-12-10 00:17:50.061653] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:34.983  [2024-12-10 00:17:50.061659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:34.983  [2024-12-10 00:17:50.061664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:34.983  [2024-12-10 00:17:50.063001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:36:34.983  [2024-12-10 00:17:50.063108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:36:34.983  [2024-12-10 00:17:50.063125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:36:34.983  [2024-12-10 00:17:50.063129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:36:34.983  [2024-12-10 00:17:50.133191] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:36:34.983  [2024-12-10 00:17:50.133554] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:36:34.983  [2024-12-10 00:17:50.133834] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:36:34.983  [2024-12-10 00:17:50.134017] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:36:34.983  [2024-12-10 00:17:50.134161] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:34.983   00:17:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:36:35.242  [2024-12-10 00:17:50.999958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:35.242    00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:35.501   00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:36:35.501    00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:35.759   00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:36:35.759    00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:36.018   00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:36:36.018    00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:36.276   00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:36:36.276   00:17:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:36:36.276    00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:36.535   00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:36:36.535    00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:36.794   00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:36:36.794    00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:36:37.053   00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:36:37.053   00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:36:37.053   00:17:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:36:37.312   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:36:37.312   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:37.572   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:36:37.572   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:36:37.830   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:37.830  [2024-12-10 00:17:53.627870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:37.830   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:36:38.089   00:17:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:36:38.347   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]]
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4
00:36:38.605   00:17:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2
00:36:40.507   00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:36:40.507    00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:36:40.507    00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:36:40.507   00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4
00:36:40.507   00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:36:40.507   00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0
00:36:40.507   00:17:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:36:40.507  [global]
00:36:40.507  thread=1
00:36:40.507  invalidate=1
00:36:40.507  rw=write
00:36:40.507  time_based=1
00:36:40.507  runtime=1
00:36:40.507  ioengine=libaio
00:36:40.507  direct=1
00:36:40.507  bs=4096
00:36:40.507  iodepth=1
00:36:40.507  norandommap=0
00:36:40.507  numjobs=1
00:36:40.507  
00:36:40.507  verify_dump=1
00:36:40.507  verify_backlog=512
00:36:40.507  verify_state_save=0
00:36:40.507  do_verify=1
00:36:40.507  verify=crc32c-intel
00:36:40.791  [job0]
00:36:40.791  filename=/dev/nvme0n1
00:36:40.791  [job1]
00:36:40.791  filename=/dev/nvme0n2
00:36:40.791  [job2]
00:36:40.791  filename=/dev/nvme0n3
00:36:40.791  [job3]
00:36:40.791  filename=/dev/nvme0n4
00:36:40.791  Could not set queue depth (nvme0n1)
00:36:40.791  Could not set queue depth (nvme0n2)
00:36:40.791  Could not set queue depth (nvme0n3)
00:36:40.791  Could not set queue depth (nvme0n4)
00:36:41.051  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:41.051  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:41.051  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:41.051  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:41.051  fio-3.35
00:36:41.051  Starting 4 threads
00:36:42.430  
00:36:42.430  job0: (groupid=0, jobs=1): err= 0: pid=3318661: Tue Dec 10 00:17:57 2024
00:36:42.430    read: IOPS=2145, BW=8583KiB/s (8789kB/s)(8592KiB/1001msec)
00:36:42.430      slat (nsec): min=6644, max=39303, avg=7970.14, stdev=1522.06
00:36:42.430      clat (usec): min=191, max=456, avg=239.52, stdev=18.16
00:36:42.430       lat (usec): min=199, max=465, avg=247.49, stdev=18.10
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  204],  5.00th=[  212], 10.00th=[  217], 20.00th=[  223],
00:36:42.430       | 30.00th=[  231], 40.00th=[  237], 50.00th=[  241], 60.00th=[  245],
00:36:42.430       | 70.00th=[  249], 80.00th=[  253], 90.00th=[  260], 95.00th=[  269],
00:36:42.430       | 99.00th=[  281], 99.50th=[  289], 99.90th=[  302], 99.95th=[  429],
00:36:42.430       | 99.99th=[  457]
00:36:42.430    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:36:42.430      slat (nsec): min=9618, max=45241, avg=11263.05, stdev=2023.72
00:36:42.430      clat (usec): min=132, max=279, avg=166.23, stdev=13.32
00:36:42.430       lat (usec): min=142, max=324, avg=177.49, stdev=13.65
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  141],  5.00th=[  147], 10.00th=[  149], 20.00th=[  155],
00:36:42.430       | 30.00th=[  161], 40.00th=[  163], 50.00th=[  167], 60.00th=[  169],
00:36:42.430       | 70.00th=[  172], 80.00th=[  176], 90.00th=[  182], 95.00th=[  188],
00:36:42.430       | 99.00th=[  210], 99.50th=[  217], 99.90th=[  237], 99.95th=[  237],
00:36:42.430       | 99.99th=[  281]
00:36:42.430     bw (  KiB/s): min=11232, max=11232, per=46.62%, avg=11232.00, stdev= 0.00, samples=1
00:36:42.430     iops        : min= 2808, max= 2808, avg=2808.00, stdev= 0.00, samples=1
00:36:42.430    lat (usec)   : 250=87.91%, 500=12.09%
00:36:42.430    cpu          : usr=4.00%, sys=7.30%, ctx=4708, majf=0, minf=2
00:36:42.430    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:42.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       issued rwts: total=2148,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:42.430       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:42.430  job1: (groupid=0, jobs=1): err= 0: pid=3318662: Tue Dec 10 00:17:57 2024
00:36:42.430    read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec)
00:36:42.430      slat (nsec): min=9822, max=26079, avg=24288.14, stdev=3305.43
00:36:42.430      clat (usec): min=40603, max=41150, avg=40944.87, stdev=109.60
00:36:42.430       lat (usec): min=40613, max=41176, avg=40969.16, stdev=111.89
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[40633],  5.00th=[40633], 10.00th=[40633], 20.00th=[40633],
00:36:42.430       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:42.430       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:36:42.430       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:36:42.430       | 99.99th=[41157]
00:36:42.430    write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets
00:36:42.430      slat (nsec): min=10779, max=48298, avg=13273.12, stdev=2543.00
00:36:42.430      clat (usec): min=137, max=281, avg=207.45, stdev=36.56
00:36:42.430       lat (usec): min=150, max=299, avg=220.72, stdev=36.91
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  143],  5.00th=[  153], 10.00th=[  161], 20.00th=[  169],
00:36:42.430       | 30.00th=[  174], 40.00th=[  186], 50.00th=[  225], 60.00th=[  237],
00:36:42.430       | 70.00th=[  239], 80.00th=[  241], 90.00th=[  243], 95.00th=[  249],
00:36:42.430       | 99.00th=[  265], 99.50th=[  273], 99.90th=[  281], 99.95th=[  281],
00:36:42.430       | 99.99th=[  281]
00:36:42.430     bw (  KiB/s): min= 4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1
00:36:42.430     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:42.430    lat (usec)   : 250=91.20%, 500=4.68%
00:36:42.430    lat (msec)   : 50=4.12%
00:36:42.430    cpu          : usr=0.89%, sys=0.59%, ctx=536, majf=0, minf=1
00:36:42.430    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:42.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:42.430       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:42.430  job2: (groupid=0, jobs=1): err= 0: pid=3318663: Tue Dec 10 00:17:57 2024
00:36:42.430    read: IOPS=830, BW=3321KiB/s (3400kB/s)(3324KiB/1001msec)
00:36:42.430      slat (nsec): min=6203, max=26845, avg=7255.64, stdev=2236.56
00:36:42.430      clat (usec): min=204, max=41214, avg=972.07, stdev=5304.63
00:36:42.430       lat (usec): min=211, max=41222, avg=979.32, stdev=5305.18
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  208],  5.00th=[  212], 10.00th=[  215], 20.00th=[  217],
00:36:42.430       | 30.00th=[  221], 40.00th=[  223], 50.00th=[  225], 60.00th=[  229],
00:36:42.430       | 70.00th=[  233], 80.00th=[  249], 90.00th=[  310], 95.00th=[  326],
00:36:42.430       | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:36:42.430       | 99.99th=[41157]
00:36:42.430    write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:36:42.430      slat (nsec): min=9012, max=51263, avg=10129.52, stdev=1713.91
00:36:42.430      clat (usec): min=133, max=3189, avg=167.47, stdev=95.62
00:36:42.430       lat (usec): min=143, max=3200, avg=177.60, stdev=95.72
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  139],  5.00th=[  143], 10.00th=[  149], 20.00th=[  153],
00:36:42.430       | 30.00th=[  157], 40.00th=[  161], 50.00th=[  163], 60.00th=[  167],
00:36:42.430       | 70.00th=[  172], 80.00th=[  176], 90.00th=[  182], 95.00th=[  190],
00:36:42.430       | 99.00th=[  200], 99.50th=[  206], 99.90th=[  289], 99.95th=[ 3195],
00:36:42.430       | 99.99th=[ 3195]
00:36:42.430     bw (  KiB/s): min= 4096, max= 4096, per=17.00%, avg=4096.00, stdev= 0.00, samples=1
00:36:42.430     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:42.430    lat (usec)   : 250=90.89%, 500=8.19%
00:36:42.430    lat (msec)   : 4=0.05%, 20=0.05%, 50=0.81%
00:36:42.430    cpu          : usr=0.60%, sys=2.00%, ctx=1855, majf=0, minf=3
00:36:42.430    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:42.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       issued rwts: total=831,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:42.430       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:42.430  job3: (groupid=0, jobs=1): err= 0: pid=3318664: Tue Dec 10 00:17:57 2024
00:36:42.430    read: IOPS=1627, BW=6510KiB/s (6666kB/s)(6640KiB/1020msec)
00:36:42.430      slat (nsec): min=6224, max=29587, avg=7224.05, stdev=1616.47
00:36:42.430      clat (usec): min=196, max=41220, avg=386.28, stdev=2439.53
00:36:42.430       lat (usec): min=203, max=41228, avg=393.50, stdev=2439.71
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  208],  5.00th=[  212], 10.00th=[  215], 20.00th=[  219],
00:36:42.430       | 30.00th=[  221], 40.00th=[  223], 50.00th=[  225], 60.00th=[  229],
00:36:42.430       | 70.00th=[  235], 80.00th=[  243], 90.00th=[  293], 95.00th=[  322],
00:36:42.430       | 99.00th=[  453], 99.50th=[  465], 99.90th=[41157], 99.95th=[41157],
00:36:42.430       | 99.99th=[41157]
00:36:42.430    write: IOPS=2007, BW=8031KiB/s (8224kB/s)(8192KiB/1020msec); 0 zone resets
00:36:42.430      slat (nsec): min=9068, max=38058, avg=10200.62, stdev=1400.57
00:36:42.430      clat (usec): min=131, max=262, avg=164.70, stdev=19.69
00:36:42.430       lat (usec): min=142, max=280, avg=174.90, stdev=19.85
00:36:42.430      clat percentiles (usec):
00:36:42.430       |  1.00th=[  141],  5.00th=[  145], 10.00th=[  147], 20.00th=[  151],
00:36:42.430       | 30.00th=[  153], 40.00th=[  157], 50.00th=[  161], 60.00th=[  163],
00:36:42.430       | 70.00th=[  167], 80.00th=[  174], 90.00th=[  188], 95.00th=[  208],
00:36:42.430       | 99.00th=[  241], 99.50th=[  251], 99.90th=[  262], 99.95th=[  265],
00:36:42.430       | 99.99th=[  265]
00:36:42.430     bw (  KiB/s): min= 5832, max=10552, per=34.00%, avg=8192.00, stdev=3337.54, samples=2
00:36:42.430     iops        : min= 1458, max= 2638, avg=2048.00, stdev=834.39, samples=2
00:36:42.430    lat (usec)   : 250=92.69%, 500=7.15%
00:36:42.430    lat (msec)   : 50=0.16%
00:36:42.430    cpu          : usr=1.37%, sys=3.83%, ctx=3708, majf=0, minf=1
00:36:42.430    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:42.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:42.431       issued rwts: total=1660,2048,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:42.431       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:42.431  
00:36:42.431  Run status group 0 (all jobs):
00:36:42.431     READ: bw=17.8MiB/s (18.7MB/s), 86.6KiB/s-8583KiB/s (88.7kB/s-8789kB/s), io=18.2MiB (19.1MB), run=1001-1020msec
00:36:42.431    WRITE: bw=23.5MiB/s (24.7MB/s), 2016KiB/s-9.99MiB/s (2064kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1020msec
00:36:42.431  
00:36:42.431  Disk stats (read/write):
00:36:42.431    nvme0n1: ios=1987/2048, merge=0/0, ticks=453/315, in_queue=768, util=86.97%
00:36:42.431    nvme0n2: ios=43/512, merge=0/0, ticks=1722/95, in_queue=1817, util=98.58%
00:36:42.431    nvme0n3: ios=512/588, merge=0/0, ticks=735/97, in_queue=832, util=88.96%
00:36:42.431    nvme0n4: ios=1642/2048, merge=0/0, ticks=468/323, in_queue=791, util=89.71%
00:36:42.431   00:17:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:36:42.431  [global]
00:36:42.431  thread=1
00:36:42.431  invalidate=1
00:36:42.431  rw=randwrite
00:36:42.431  time_based=1
00:36:42.431  runtime=1
00:36:42.431  ioengine=libaio
00:36:42.431  direct=1
00:36:42.431  bs=4096
00:36:42.431  iodepth=1
00:36:42.431  norandommap=0
00:36:42.431  numjobs=1
00:36:42.431  
00:36:42.431  verify_dump=1
00:36:42.431  verify_backlog=512
00:36:42.431  verify_state_save=0
00:36:42.431  do_verify=1
00:36:42.431  verify=crc32c-intel
00:36:42.431  [job0]
00:36:42.431  filename=/dev/nvme0n1
00:36:42.431  [job1]
00:36:42.431  filename=/dev/nvme0n2
00:36:42.431  [job2]
00:36:42.431  filename=/dev/nvme0n3
00:36:42.431  [job3]
00:36:42.431  filename=/dev/nvme0n4
00:36:42.431  Could not set queue depth (nvme0n1)
00:36:42.431  Could not set queue depth (nvme0n2)
00:36:42.431  Could not set queue depth (nvme0n3)
00:36:42.431  Could not set queue depth (nvme0n4)
00:36:42.431  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:42.431  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:42.431  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:42.431  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:42.431  fio-3.35
00:36:42.431  Starting 4 threads
00:36:43.801  
00:36:43.801  job0: (groupid=0, jobs=1): err= 0: pid=3319028: Tue Dec 10 00:17:59 2024
00:36:43.801    read: IOPS=2274, BW=9099KiB/s (9317kB/s)(9108KiB/1001msec)
00:36:43.801      slat (nsec): min=6211, max=35392, avg=7040.35, stdev=1100.18
00:36:43.801      clat (usec): min=185, max=514, avg=245.50, stdev=32.34
00:36:43.801       lat (usec): min=192, max=521, avg=252.54, stdev=32.43
00:36:43.801      clat percentiles (usec):
00:36:43.801       |  1.00th=[  202],  5.00th=[  208], 10.00th=[  212], 20.00th=[  223],
00:36:43.801       | 30.00th=[  237], 40.00th=[  243], 50.00th=[  245], 60.00th=[  249],
00:36:43.801       | 70.00th=[  251], 80.00th=[  253], 90.00th=[  265], 95.00th=[  297],
00:36:43.801       | 99.00th=[  396], 99.50th=[  474], 99.90th=[  502], 99.95th=[  510],
00:36:43.801       | 99.99th=[  515]
00:36:43.801    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:36:43.801      slat (nsec): min=8726, max=68142, avg=9750.80, stdev=1523.73
00:36:43.801      clat (usec): min=116, max=344, avg=152.26, stdev=25.11
00:36:43.801       lat (usec): min=125, max=412, avg=162.01, stdev=25.41
00:36:43.801      clat percentiles (usec):
00:36:43.801       |  1.00th=[  123],  5.00th=[  127], 10.00th=[  130], 20.00th=[  135],
00:36:43.801       | 30.00th=[  137], 40.00th=[  141], 50.00th=[  145], 60.00th=[  149],
00:36:43.801       | 70.00th=[  159], 80.00th=[  169], 90.00th=[  184], 95.00th=[  200],
00:36:43.801       | 99.00th=[  243], 99.50th=[  253], 99.90th=[  265], 99.95th=[  265],
00:36:43.801       | 99.99th=[  347]
00:36:43.801     bw (  KiB/s): min=11040, max=11040, per=42.34%, avg=11040.00, stdev= 0.00, samples=1
00:36:43.801     iops        : min= 2760, max= 2760, avg=2760.00, stdev= 0.00, samples=1
00:36:43.801    lat (usec)   : 250=85.32%, 500=14.62%, 750=0.06%
00:36:43.801    cpu          : usr=2.00%, sys=4.50%, ctx=4839, majf=0, minf=1
00:36:43.801    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:43.801       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.801       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.801       issued rwts: total=2277,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:43.801       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:43.801  job1: (groupid=0, jobs=1): err= 0: pid=3319034: Tue Dec 10 00:17:59 2024
00:36:43.801    read: IOPS=2269, BW=9079KiB/s (9297kB/s)(9088KiB/1001msec)
00:36:43.801      slat (nsec): min=6357, max=18538, avg=7332.11, stdev=1036.72
00:36:43.801      clat (usec): min=181, max=510, avg=246.17, stdev=34.14
00:36:43.801       lat (usec): min=188, max=518, avg=253.50, stdev=34.25
00:36:43.801      clat percentiles (usec):
00:36:43.801       |  1.00th=[  196],  5.00th=[  208], 10.00th=[  212], 20.00th=[  221],
00:36:43.801       | 30.00th=[  239], 40.00th=[  245], 50.00th=[  247], 60.00th=[  249],
00:36:43.801       | 70.00th=[  251], 80.00th=[  255], 90.00th=[  269], 95.00th=[  293],
00:36:43.801       | 99.00th=[  453], 99.50th=[  482], 99.90th=[  502], 99.95th=[  506],
00:36:43.801       | 99.99th=[  510]
00:36:43.801    write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:36:43.801      slat (nsec): min=9363, max=46349, avg=10776.86, stdev=1443.07
00:36:43.801      clat (usec): min=119, max=313, avg=150.20, stdev=23.30
00:36:43.801       lat (usec): min=129, max=359, avg=160.98, stdev=23.60
00:36:43.801      clat percentiles (usec):
00:36:43.801       |  1.00th=[  124],  5.00th=[  127], 10.00th=[  130], 20.00th=[  133],
00:36:43.801       | 30.00th=[  137], 40.00th=[  139], 50.00th=[  143], 60.00th=[  149],
00:36:43.801       | 70.00th=[  157], 80.00th=[  167], 90.00th=[  178], 95.00th=[  190],
00:36:43.801       | 99.00th=[  243], 99.50th=[  245], 99.90th=[  249], 99.95th=[  260],
00:36:43.801       | 99.99th=[  314]
00:36:43.801     bw (  KiB/s): min=10888, max=10888, per=41.75%, avg=10888.00, stdev= 0.00, samples=1
00:36:43.801     iops        : min= 2722, max= 2722, avg=2722.00, stdev= 0.00, samples=1
00:36:43.801    lat (usec)   : 250=84.60%, 500=15.34%, 750=0.06%
00:36:43.801    cpu          : usr=2.30%, sys=4.70%, ctx=4833, majf=0, minf=1
00:36:43.801    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:43.801       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.801       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.801       issued rwts: total=2272,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:43.801       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:43.801  job2: (groupid=0, jobs=1): err= 0: pid=3319041: Tue Dec 10 00:17:59 2024
00:36:43.801    read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec)
00:36:43.801      slat (nsec): min=10626, max=26211, avg=22931.36, stdev=2843.53
00:36:43.801      clat (usec): min=40487, max=41047, avg=40946.99, stdev=109.77
00:36:43.801       lat (usec): min=40498, max=41070, avg=40969.92, stdev=112.32
00:36:43.801      clat percentiles (usec):
00:36:43.801       |  1.00th=[40633],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:36:43.801       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:36:43.802       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:36:43.802       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:36:43.802       | 99.99th=[41157]
00:36:43.802    write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets
00:36:43.802      slat (nsec): min=10269, max=40735, avg=12007.29, stdev=2463.85
00:36:43.802      clat (usec): min=140, max=273, avg=216.45, stdev=30.35
00:36:43.802       lat (usec): min=152, max=301, avg=228.46, stdev=30.49
00:36:43.802      clat percentiles (usec):
00:36:43.802       |  1.00th=[  151],  5.00th=[  161], 10.00th=[  169], 20.00th=[  184],
00:36:43.802       | 30.00th=[  198], 40.00th=[  212], 50.00th=[  229], 60.00th=[  235],
00:36:43.802       | 70.00th=[  239], 80.00th=[  243], 90.00th=[  249], 95.00th=[  253],
00:36:43.802       | 99.00th=[  260], 99.50th=[  265], 99.90th=[  273], 99.95th=[  273],
00:36:43.802       | 99.99th=[  273]
00:36:43.802     bw (  KiB/s): min= 4096, max= 4096, per=15.71%, avg=4096.00, stdev= 0.00, samples=1
00:36:43.802     iops        : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:36:43.802    lat (usec)   : 250=87.64%, 500=8.24%
00:36:43.802    lat (msec)   : 50=4.12%
00:36:43.802    cpu          : usr=0.49%, sys=0.88%, ctx=536, majf=0, minf=1
00:36:43.802    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:43.802       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.802       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.802       issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:43.802       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:43.802  job3: (groupid=0, jobs=1): err= 0: pid=3319045: Tue Dec 10 00:17:59 2024
00:36:43.802    read: IOPS=557, BW=2230KiB/s (2283kB/s)(2272KiB/1019msec)
00:36:43.802      slat (nsec): min=7393, max=43756, avg=8878.01, stdev=3055.43
00:36:43.802      clat (usec): min=214, max=40995, avg=1391.89, stdev=6742.69
00:36:43.802       lat (usec): min=221, max=41016, avg=1400.77, stdev=6744.81
00:36:43.802      clat percentiles (usec):
00:36:43.802       |  1.00th=[  223],  5.00th=[  231], 10.00th=[  235], 20.00th=[  239],
00:36:43.802       | 30.00th=[  241], 40.00th=[  243], 50.00th=[  245], 60.00th=[  247],
00:36:43.802       | 70.00th=[  249], 80.00th=[  251], 90.00th=[  253], 95.00th=[  260],
00:36:43.802       | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:36:43.802       | 99.99th=[41157]
00:36:43.802    write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets
00:36:43.802      slat (nsec): min=10202, max=39832, avg=11783.58, stdev=2257.33
00:36:43.802      clat (usec): min=146, max=3628, avg=200.57, stdev=113.54
00:36:43.802       lat (usec): min=157, max=3639, avg=212.35, stdev=113.69
00:36:43.802      clat percentiles (usec):
00:36:43.802       |  1.00th=[  153],  5.00th=[  157], 10.00th=[  161], 20.00th=[  165],
00:36:43.802       | 30.00th=[  169], 40.00th=[  174], 50.00th=[  180], 60.00th=[  194],
00:36:43.802       | 70.00th=[  237], 80.00th=[  241], 90.00th=[  245], 95.00th=[  251],
00:36:43.802       | 99.00th=[  265], 99.50th=[  273], 99.90th=[  490], 99.95th=[ 3621],
00:36:43.802       | 99.99th=[ 3621]
00:36:43.802     bw (  KiB/s): min= 8192, max= 8192, per=31.42%, avg=8192.00, stdev= 0.00, samples=1
00:36:43.802     iops        : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:36:43.802    lat (usec)   : 250=89.51%, 500=9.42%
00:36:43.802    lat (msec)   : 4=0.06%, 50=1.01%
00:36:43.802    cpu          : usr=1.18%, sys=2.65%, ctx=1592, majf=0, minf=1
00:36:43.802    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:43.802       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.802       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:43.802       issued rwts: total=568,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:43.802       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:43.802  
00:36:43.802  Run status group 0 (all jobs):
00:36:43.802     READ: bw=19.7MiB/s (20.6MB/s), 86.2KiB/s-9099KiB/s (88.3kB/s-9317kB/s), io=20.1MiB (21.0MB), run=1001-1021msec
00:36:43.802    WRITE: bw=25.5MiB/s (26.7MB/s), 2006KiB/s-9.99MiB/s (2054kB/s-10.5MB/s), io=26.0MiB (27.3MB), run=1001-1021msec
00:36:43.802  
00:36:43.802  Disk stats (read/write):
00:36:43.802    nvme0n1: ios=2028/2048, merge=0/0, ticks=539/313, in_queue=852, util=87.07%
00:36:43.802    nvme0n2: ios=2010/2048, merge=0/0, ticks=1460/311, in_queue=1771, util=97.97%
00:36:43.802    nvme0n3: ios=40/512, merge=0/0, ticks=1043/102, in_queue=1145, util=98.85%
00:36:43.802    nvme0n4: ios=563/1024, merge=0/0, ticks=585/191, in_queue=776, util=89.68%
00:36:43.802   00:17:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:36:43.802  [global]
00:36:43.802  thread=1
00:36:43.802  invalidate=1
00:36:43.802  rw=write
00:36:43.802  time_based=1
00:36:43.802  runtime=1
00:36:43.802  ioengine=libaio
00:36:43.802  direct=1
00:36:43.802  bs=4096
00:36:43.802  iodepth=128
00:36:43.802  norandommap=0
00:36:43.802  numjobs=1
00:36:43.802  
00:36:43.802  verify_dump=1
00:36:43.802  verify_backlog=512
00:36:43.802  verify_state_save=0
00:36:43.802  do_verify=1
00:36:43.802  verify=crc32c-intel
00:36:43.802  [job0]
00:36:43.802  filename=/dev/nvme0n1
00:36:43.802  [job1]
00:36:43.802  filename=/dev/nvme0n2
00:36:43.802  [job2]
00:36:43.802  filename=/dev/nvme0n3
00:36:43.802  [job3]
00:36:43.802  filename=/dev/nvme0n4
00:36:43.802  Could not set queue depth (nvme0n1)
00:36:43.802  Could not set queue depth (nvme0n2)
00:36:43.802  Could not set queue depth (nvme0n3)
00:36:43.802  Could not set queue depth (nvme0n4)
00:36:44.060  job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:44.060  job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:44.060  job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:44.060  job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:44.060  fio-3.35
00:36:44.060  Starting 4 threads
00:36:45.430  
00:36:45.430  job0: (groupid=0, jobs=1): err= 0: pid=3319450: Tue Dec 10 00:18:01 2024
00:36:45.430    read: IOPS=2803, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec)
00:36:45.430      slat (nsec): min=1661, max=14751k, avg=142910.96, stdev=967830.47
00:36:45.430      clat (usec): min=2023, max=56301, avg=16915.58, stdev=11808.93
00:36:45.430       lat (usec): min=2778, max=56305, avg=17058.49, stdev=11889.22
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 5014],  5.00th=[ 6390], 10.00th=[ 7963], 20.00th=[ 8586],
00:36:45.430       | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[11207], 60.00th=[14746],
00:36:45.430       | 70.00th=[19268], 80.00th=[27657], 90.00th=[33424], 95.00th=[42206],
00:36:45.430       | 99.00th=[54264], 99.50th=[55313], 99.90th=[56361], 99.95th=[56361],
00:36:45.430       | 99.99th=[56361]
00:36:45.430    write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets
00:36:45.430      slat (usec): min=2, max=16615, avg=173.88, stdev=875.82
00:36:45.430      clat (usec): min=1499, max=78599, avg=25954.53, stdev=18800.70
00:36:45.430       lat (usec): min=1513, max=78608, avg=26128.42, stdev=18919.98
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 4293],  5.00th=[ 6259], 10.00th=[ 7963], 20.00th=[ 9896],
00:36:45.430       | 30.00th=[12256], 40.00th=[16712], 50.00th=[16909], 60.00th=[18482],
00:36:45.430       | 70.00th=[39584], 80.00th=[46924], 90.00th=[52691], 95.00th=[60556],
00:36:45.430       | 99.00th=[76022], 99.50th=[77071], 99.90th=[78119], 99.95th=[78119],
00:36:45.430       | 99.99th=[78119]
00:36:45.430     bw (  KiB/s): min=10032, max=14544, per=19.96%, avg=12288.00, stdev=3190.47, samples=2
00:36:45.430     iops        : min= 2508, max= 3636, avg=3072.00, stdev=797.62, samples=2
00:36:45.430    lat (msec)   : 2=0.17%, 4=0.42%, 10=31.75%, 20=32.68%, 50=26.99%
00:36:45.430    lat (msec)   : 100=7.99%
00:36:45.430    cpu          : usr=2.59%, sys=4.09%, ctx=333, majf=0, minf=1
00:36:45.430    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:36:45.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:45.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:45.430       issued rwts: total=2812,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:45.430       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:45.430  job1: (groupid=0, jobs=1): err= 0: pid=3319464: Tue Dec 10 00:18:01 2024
00:36:45.430    read: IOPS=2903, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1006msec)
00:36:45.430      slat (nsec): min=1279, max=10034k, avg=124705.21, stdev=721061.51
00:36:45.430      clat (usec): min=506, max=57263, avg=16061.08, stdev=9434.09
00:36:45.430       lat (usec): min=2652, max=58073, avg=16185.78, stdev=9517.31
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 3589],  5.00th=[ 6063], 10.00th=[ 9110], 20.00th=[10290],
00:36:45.430       | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[13304],
00:36:45.430       | 70.00th=[19268], 80.00th=[23462], 90.00th=[28705], 95.00th=[32637],
00:36:45.430       | 99.00th=[51643], 99.50th=[53216], 99.90th=[57410], 99.95th=[57410],
00:36:45.430       | 99.99th=[57410]
00:36:45.430    write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets
00:36:45.430      slat (usec): min=2, max=26581, avg=200.19, stdev=1048.41
00:36:45.430      clat (usec): min=839, max=80645, avg=25083.20, stdev=16368.66
00:36:45.430       lat (usec): min=852, max=80656, avg=25283.39, stdev=16484.43
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 5866],  5.00th=[10028], 10.00th=[10814], 20.00th=[10945],
00:36:45.430       | 30.00th=[14877], 40.00th=[17695], 50.00th=[18482], 60.00th=[20579],
00:36:45.430       | 70.00th=[27657], 80.00th=[42206], 90.00th=[49021], 95.00th=[54264],
00:36:45.430       | 99.00th=[73925], 99.50th=[77071], 99.90th=[80217], 99.95th=[80217],
00:36:45.430       | 99.99th=[80217]
00:36:45.430     bw (  KiB/s): min= 8192, max=16384, per=19.96%, avg=12288.00, stdev=5792.62, samples=2
00:36:45.430     iops        : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2
00:36:45.430    lat (usec)   : 750=0.02%, 1000=0.10%
00:36:45.430    lat (msec)   : 4=1.07%, 10=9.38%, 20=53.93%, 50=30.29%, 100=5.22%
00:36:45.430    cpu          : usr=2.59%, sys=3.68%, ctx=340, majf=0, minf=1
00:36:45.430    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9%
00:36:45.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:45.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:45.430       issued rwts: total=2921,3072,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:45.430       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:45.430  job2: (groupid=0, jobs=1): err= 0: pid=3319480: Tue Dec 10 00:18:01 2024
00:36:45.430    read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec)
00:36:45.430      slat (nsec): min=1337, max=28744k, avg=92494.87, stdev=867794.97
00:36:45.430      clat (usec): min=3439, max=56501, avg=11979.07, stdev=6178.86
00:36:45.430       lat (usec): min=3447, max=56510, avg=12071.56, stdev=6240.66
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 6194],  5.00th=[ 7111], 10.00th=[ 7439], 20.00th=[ 8291],
00:36:45.430       | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10159],
00:36:45.430       | 70.00th=[11600], 80.00th=[14353], 90.00th=[20055], 95.00th=[27657],
00:36:45.430       | 99.00th=[33817], 99.50th=[34341], 99.90th=[56361], 99.95th=[56361],
00:36:45.430       | 99.99th=[56361]
00:36:45.430    write: IOPS=5238, BW=20.5MiB/s (21.5MB/s)(20.6MiB/1009msec); 0 zone resets
00:36:45.430      slat (usec): min=2, max=14737, avg=92.79, stdev=627.39
00:36:45.430      clat (usec): min=1578, max=50894, avg=12560.83, stdev=8294.59
00:36:45.430       lat (usec): min=1591, max=50904, avg=12653.62, stdev=8348.16
00:36:45.430      clat percentiles (usec):
00:36:45.430       |  1.00th=[ 4424],  5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 7439],
00:36:45.430       | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[11338],
00:36:45.430       | 70.00th=[12649], 80.00th=[16909], 90.00th=[19530], 95.00th=[31589],
00:36:45.430       | 99.00th=[45876], 99.50th=[48497], 99.90th=[50070], 99.95th=[50070],
00:36:45.430       | 99.99th=[51119]
00:36:45.430     bw (  KiB/s): min=16136, max=25128, per=33.52%, avg=20632.00, stdev=6358.30, samples=2
00:36:45.430     iops        : min= 4034, max= 6282, avg=5158.00, stdev=1589.58, samples=2
00:36:45.430    lat (msec)   : 2=0.02%, 4=0.51%, 10=55.68%, 20=34.33%, 50=9.30%
00:36:45.430    lat (msec)   : 100=0.16%
00:36:45.430    cpu          : usr=4.17%, sys=6.45%, ctx=401, majf=0, minf=1
00:36:45.430    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:36:45.430       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:45.430       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:45.430       issued rwts: total=5120,5286,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:45.430       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:45.430  job3: (groupid=0, jobs=1): err= 0: pid=3319485: Tue Dec 10 00:18:01 2024
00:36:45.430    read: IOPS=3723, BW=14.5MiB/s (15.2MB/s)(14.6MiB/1005msec)
00:36:45.430      slat (nsec): min=1388, max=30632k, avg=115337.72, stdev=935052.69
00:36:45.430      clat (usec): min=944, max=44525, avg=15402.23, stdev=5566.53
00:36:45.431       lat (usec): min=5409, max=44551, avg=15517.57, stdev=5599.38
00:36:45.431      clat percentiles (usec):
00:36:45.431       |  1.00th=[ 6063],  5.00th=[10159], 10.00th=[10683], 20.00th=[11207],
00:36:45.431       | 30.00th=[12125], 40.00th=[12518], 50.00th=[14091], 60.00th=[15664],
00:36:45.431       | 70.00th=[16581], 80.00th=[18220], 90.00th=[21890], 95.00th=[30540],
00:36:45.431       | 99.00th=[32113], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914],
00:36:45.431       | 99.99th=[44303]
00:36:45.431    write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets
00:36:45.431      slat (usec): min=2, max=29790, avg=125.61, stdev=867.56
00:36:45.431      clat (usec): min=1027, max=48017, avg=16738.76, stdev=8134.19
00:36:45.431       lat (usec): min=1039, max=52101, avg=16864.37, stdev=8198.10
00:36:45.431      clat percentiles (usec):
00:36:45.431       |  1.00th=[ 4948],  5.00th=[ 6980], 10.00th=[ 9241], 20.00th=[11600],
00:36:45.431       | 30.00th=[12518], 40.00th=[12780], 50.00th=[13829], 60.00th=[15795],
00:36:45.431       | 70.00th=[17171], 80.00th=[23987], 90.00th=[28967], 95.00th=[29492],
00:36:45.431       | 99.00th=[47973], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973],
00:36:45.431       | 99.99th=[47973]
00:36:45.431     bw (  KiB/s): min=16384, max=16384, per=26.62%, avg=16384.00, stdev= 0.00, samples=2
00:36:45.431     iops        : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2
00:36:45.431    lat (usec)   : 1000=0.01%
00:36:45.431    lat (msec)   : 2=0.14%, 4=0.10%, 10=9.80%, 20=70.59%, 50=19.35%
00:36:45.431    cpu          : usr=2.39%, sys=4.68%, ctx=284, majf=0, minf=1
00:36:45.431    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:36:45.431       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:45.431       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:45.431       issued rwts: total=3742,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:45.431       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:45.431  
00:36:45.431  Run status group 0 (all jobs):
00:36:45.431     READ: bw=56.5MiB/s (59.2MB/s), 11.0MiB/s-19.8MiB/s (11.5MB/s-20.8MB/s), io=57.0MiB (59.8MB), run=1003-1009msec
00:36:45.431    WRITE: bw=60.1MiB/s (63.0MB/s), 11.9MiB/s-20.5MiB/s (12.5MB/s-21.5MB/s), io=60.6MiB (63.6MB), run=1003-1009msec
00:36:45.431  
00:36:45.431  Disk stats (read/write):
00:36:45.431    nvme0n1: ios=2098/2560, merge=0/0, ticks=32895/69134, in_queue=102029, util=86.47%
00:36:45.431    nvme0n2: ios=2099/2368, merge=0/0, ticks=11991/28186, in_queue=40177, util=98.17%
00:36:45.431    nvme0n3: ios=4631/4623, merge=0/0, ticks=54368/49944, in_queue=104312, util=98.12%
00:36:45.431    nvme0n4: ios=3433/3584, merge=0/0, ticks=33324/29066, in_queue=62390, util=96.22%
00:36:45.431   00:18:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:36:45.431  [global]
00:36:45.431  thread=1
00:36:45.431  invalidate=1
00:36:45.431  rw=randwrite
00:36:45.431  time_based=1
00:36:45.431  runtime=1
00:36:45.431  ioengine=libaio
00:36:45.431  direct=1
00:36:45.431  bs=4096
00:36:45.431  iodepth=128
00:36:45.431  norandommap=0
00:36:45.431  numjobs=1
00:36:45.431  
00:36:45.431  verify_dump=1
00:36:45.431  verify_backlog=512
00:36:45.431  verify_state_save=0
00:36:45.431  do_verify=1
00:36:45.431  verify=crc32c-intel
00:36:45.431  [job0]
00:36:45.431  filename=/dev/nvme0n1
00:36:45.431  [job1]
00:36:45.431  filename=/dev/nvme0n2
00:36:45.431  [job2]
00:36:45.431  filename=/dev/nvme0n3
00:36:45.431  [job3]
00:36:45.431  filename=/dev/nvme0n4
00:36:45.431  Could not set queue depth (nvme0n1)
00:36:45.431  Could not set queue depth (nvme0n2)
00:36:45.431  Could not set queue depth (nvme0n3)
00:36:45.431  Could not set queue depth (nvme0n4)
00:36:45.688  job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:45.688  job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:45.688  job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:45.688  job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:36:45.688  fio-3.35
00:36:45.688  Starting 4 threads
00:36:47.128  
00:36:47.128  job0: (groupid=0, jobs=1): err= 0: pid=3319961: Tue Dec 10 00:18:02 2024
00:36:47.128    read: IOPS=3606, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1013msec)
00:36:47.128      slat (nsec): min=1285, max=14048k, avg=117653.42, stdev=931433.94
00:36:47.128      clat (usec): min=2039, max=59710, avg=15531.52, stdev=5145.51
00:36:47.128       lat (usec): min=2044, max=63913, avg=15649.17, stdev=5214.47
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 5473],  5.00th=[ 9896], 10.00th=[10945], 20.00th=[11994],
00:36:47.128       | 30.00th=[12649], 40.00th=[13960], 50.00th=[14615], 60.00th=[15664],
00:36:47.128       | 70.00th=[16909], 80.00th=[18482], 90.00th=[21365], 95.00th=[23462],
00:36:47.128       | 99.00th=[30802], 99.50th=[38011], 99.90th=[58983], 99.95th=[58983],
00:36:47.128       | 99.99th=[59507]
00:36:47.128    write: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec); 0 zone resets
00:36:47.128      slat (nsec): min=1842, max=13783k, avg=121206.67, stdev=855474.16
00:36:47.128      clat (usec): min=2777, max=55578, avg=17436.58, stdev=9932.45
00:36:47.128       lat (usec): min=2786, max=55585, avg=17557.79, stdev=10016.53
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 6456],  5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10945],
00:36:47.128       | 30.00th=[12387], 40.00th=[13042], 50.00th=[14222], 60.00th=[15664],
00:36:47.128       | 70.00th=[18482], 80.00th=[19268], 90.00th=[34341], 95.00th=[43779],
00:36:47.128       | 99.00th=[48497], 99.50th=[50070], 99.90th=[55313], 99.95th=[55313],
00:36:47.128       | 99.99th=[55837]
00:36:47.128     bw (  KiB/s): min=15920, max=16351, per=22.03%, avg=16135.50, stdev=304.76, samples=2
00:36:47.128     iops        : min= 3980, max= 4087, avg=4033.50, stdev=75.66, samples=2
00:36:47.128    lat (msec)   : 4=0.21%, 10=9.10%, 20=73.83%, 50=16.39%, 100=0.48%
00:36:47.128    cpu          : usr=3.95%, sys=3.95%, ctx=224, majf=0, minf=1
00:36:47.128    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:36:47.128       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.128       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:47.128       issued rwts: total=3653,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.128       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:47.128  job1: (groupid=0, jobs=1): err= 0: pid=3319975: Tue Dec 10 00:18:02 2024
00:36:47.128    read: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec)
00:36:47.128      slat (usec): min=2, max=23538, avg=161.57, stdev=1183.99
00:36:47.128      clat (usec): min=7934, max=58922, avg=20886.71, stdev=10630.81
00:36:47.128       lat (usec): min=7941, max=58931, avg=21048.28, stdev=10736.36
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 9765],  5.00th=[10552], 10.00th=[11731], 20.00th=[12649],
00:36:47.128       | 30.00th=[13042], 40.00th=[13566], 50.00th=[17171], 60.00th=[20055],
00:36:47.128       | 70.00th=[23725], 80.00th=[29754], 90.00th=[38011], 95.00th=[43779],
00:36:47.128       | 99.00th=[52691], 99.50th=[52691], 99.90th=[58983], 99.95th=[58983],
00:36:47.128       | 99.99th=[58983]
00:36:47.128    write: IOPS=3243, BW=12.7MiB/s (13.3MB/s)(12.8MiB/1007msec); 0 zone resets
00:36:47.128      slat (usec): min=3, max=19880, avg=148.34, stdev=1140.89
00:36:47.128      clat (usec): min=1554, max=57018, avg=19364.15, stdev=8588.90
00:36:47.128       lat (usec): min=6815, max=57027, avg=19512.49, stdev=8699.10
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 7177],  5.00th=[10945], 10.00th=[11338], 20.00th=[11600],
00:36:47.128       | 30.00th=[12911], 40.00th=[14484], 50.00th=[18482], 60.00th=[18744],
00:36:47.128       | 70.00th=[21890], 80.00th=[27657], 90.00th=[32375], 95.00th=[34866],
00:36:47.128       | 99.00th=[44827], 99.50th=[44827], 99.90th=[56886], 99.95th=[56886],
00:36:47.128       | 99.99th=[56886]
00:36:47.128     bw (  KiB/s): min=10584, max=14520, per=17.13%, avg=12552.00, stdev=2783.17, samples=2
00:36:47.128     iops        : min= 2646, max= 3630, avg=3138.00, stdev=695.79, samples=2
00:36:47.128    lat (msec)   : 2=0.02%, 10=2.82%, 20=59.42%, 50=36.57%, 100=1.17%
00:36:47.128    cpu          : usr=2.58%, sys=5.17%, ctx=147, majf=0, minf=2
00:36:47.128    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0%
00:36:47.128       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.128       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:47.128       issued rwts: total=3072,3266,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.128       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:47.128  job2: (groupid=0, jobs=1): err= 0: pid=3319994: Tue Dec 10 00:18:02 2024
00:36:47.128    read: IOPS=5195, BW=20.3MiB/s (21.3MB/s)(20.5MiB/1010msec)
00:36:47.128      slat (nsec): min=1104, max=11282k, avg=96551.60, stdev=729429.50
00:36:47.128      clat (usec): min=1267, max=57233, avg=12446.71, stdev=3451.23
00:36:47.128       lat (usec): min=4053, max=57238, avg=12543.26, stdev=3496.31
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 5604],  5.00th=[ 8291], 10.00th=[ 9372], 20.00th=[10552],
00:36:47.128       | 30.00th=[10945], 40.00th=[11338], 50.00th=[11731], 60.00th=[12125],
00:36:47.128       | 70.00th=[13042], 80.00th=[14222], 90.00th=[16581], 95.00th=[19006],
00:36:47.128       | 99.00th=[21365], 99.50th=[21890], 99.90th=[55837], 99.95th=[55837],
00:36:47.128       | 99.99th=[57410]
00:36:47.128    write: IOPS=5576, BW=21.8MiB/s (22.8MB/s)(22.0MiB/1010msec); 0 zone resets
00:36:47.128      slat (usec): min=2, max=9660, avg=82.44, stdev=565.76
00:36:47.128      clat (usec): min=4360, max=25066, avg=11028.42, stdev=2317.86
00:36:47.128       lat (usec): min=4376, max=25074, avg=11110.85, stdev=2349.28
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 6128],  5.00th=[ 7177], 10.00th=[ 7701], 20.00th=[ 9372],
00:36:47.128       | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600],
00:36:47.128       | 70.00th=[11863], 80.00th=[12125], 90.00th=[12780], 95.00th=[15139],
00:36:47.128       | 99.00th=[18744], 99.50th=[20055], 99.90th=[25035], 99.95th=[25035],
00:36:47.128       | 99.99th=[25035]
00:36:47.128     bw (  KiB/s): min=21888, max=23160, per=30.75%, avg=22524.00, stdev=899.44, samples=2
00:36:47.128     iops        : min= 5472, max= 5790, avg=5631.00, stdev=224.86, samples=2
00:36:47.128    lat (msec)   : 2=0.01%, 10=18.71%, 20=79.58%, 50=1.63%, 100=0.07%
00:36:47.128    cpu          : usr=4.06%, sys=6.54%, ctx=415, majf=0, minf=1
00:36:47.128    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:36:47.128       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.128       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:47.128       issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.128       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:47.128  job3: (groupid=0, jobs=1): err= 0: pid=3320000: Tue Dec 10 00:18:02 2024
00:36:47.128    read: IOPS=5064, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1011msec)
00:36:47.128      slat (nsec): min=1433, max=12572k, avg=99672.24, stdev=804184.94
00:36:47.128      clat (usec): min=7320, max=27181, avg=12630.00, stdev=3324.86
00:36:47.128       lat (usec): min=7332, max=28278, avg=12729.67, stdev=3400.17
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 8094],  5.00th=[ 8979], 10.00th=[ 9503], 20.00th=[10290],
00:36:47.128       | 30.00th=[10683], 40.00th=[11076], 50.00th=[11338], 60.00th=[11994],
00:36:47.128       | 70.00th=[12911], 80.00th=[15139], 90.00th=[18220], 95.00th=[19530],
00:36:47.128       | 99.00th=[21890], 99.50th=[24773], 99.90th=[25822], 99.95th=[25822],
00:36:47.128       | 99.99th=[27132]
00:36:47.128    write: IOPS=5497, BW=21.5MiB/s (22.5MB/s)(21.7MiB/1011msec); 0 zone resets
00:36:47.128      slat (usec): min=2, max=11723, avg=81.54, stdev=592.51
00:36:47.128      clat (usec): min=2139, max=25716, avg=11331.25, stdev=2951.75
00:36:47.128       lat (usec): min=2148, max=25719, avg=11412.79, stdev=2978.02
00:36:47.128      clat percentiles (usec):
00:36:47.128       |  1.00th=[ 4883],  5.00th=[ 6915], 10.00th=[ 7767], 20.00th=[ 8848],
00:36:47.128       | 30.00th=[10028], 40.00th=[10814], 50.00th=[11207], 60.00th=[11600],
00:36:47.128       | 70.00th=[12125], 80.00th=[13829], 90.00th=[14877], 95.00th=[16909],
00:36:47.128       | 99.00th=[20055], 99.50th=[21627], 99.90th=[22152], 99.95th=[23987],
00:36:47.128       | 99.99th=[25822]
00:36:47.128     bw (  KiB/s): min=20480, max=22968, per=29.66%, avg=21724.00, stdev=1759.28, samples=2
00:36:47.128     iops        : min= 5120, max= 5742, avg=5431.00, stdev=439.82, samples=2
00:36:47.128    lat (msec)   : 4=0.37%, 10=22.36%, 20=74.85%, 50=2.41%
00:36:47.128    cpu          : usr=4.75%, sys=6.53%, ctx=383, majf=0, minf=1
00:36:47.128    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:36:47.129       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:47.129       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:36:47.129       issued rwts: total=5120,5558,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:47.129       latency   : target=0, window=0, percentile=100.00%, depth=128
00:36:47.129  
00:36:47.129  Run status group 0 (all jobs):
00:36:47.129     READ: bw=65.9MiB/s (69.1MB/s), 11.9MiB/s-20.3MiB/s (12.5MB/s-21.3MB/s), io=66.8MiB (70.0MB), run=1007-1013msec
00:36:47.129    WRITE: bw=71.5MiB/s (75.0MB/s), 12.7MiB/s-21.8MiB/s (13.3MB/s-22.8MB/s), io=72.5MiB (76.0MB), run=1007-1013msec
00:36:47.129  
00:36:47.129  Disk stats (read/write):
00:36:47.129    nvme0n1: ios=3634/3598, merge=0/0, ticks=44613/36377, in_queue=80990, util=90.38%
00:36:47.129    nvme0n2: ios=2203/2560, merge=0/0, ticks=26134/25616, in_queue=51750, util=90.65%
00:36:47.129    nvme0n3: ios=4485/4608, merge=0/0, ticks=46065/39771, in_queue=85836, util=97.39%
00:36:47.129    nvme0n4: ios=4254/4608, merge=0/0, ticks=52727/49876, in_queue=102603, util=96.62%
00:36:47.129   00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:36:47.129   00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3320138
00:36:47.129   00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:36:47.129   00:18:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:36:47.129  [global]
00:36:47.129  thread=1
00:36:47.129  invalidate=1
00:36:47.129  rw=read
00:36:47.129  time_based=1
00:36:47.129  runtime=10
00:36:47.129  ioengine=libaio
00:36:47.129  direct=1
00:36:47.129  bs=4096
00:36:47.129  iodepth=1
00:36:47.129  norandommap=1
00:36:47.129  numjobs=1
00:36:47.129  
00:36:47.129  [job0]
00:36:47.129  filename=/dev/nvme0n1
00:36:47.129  [job1]
00:36:47.129  filename=/dev/nvme0n2
00:36:47.129  [job2]
00:36:47.129  filename=/dev/nvme0n3
00:36:47.129  [job3]
00:36:47.129  filename=/dev/nvme0n4
00:36:47.129  Could not set queue depth (nvme0n1)
00:36:47.129  Could not set queue depth (nvme0n2)
00:36:47.129  Could not set queue depth (nvme0n3)
00:36:47.129  Could not set queue depth (nvme0n4)
00:36:47.129  job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:47.129  job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:47.129  job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:47.129  job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:36:47.129  fio-3.35
00:36:47.129  Starting 4 threads
00:36:50.459   00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:36:50.459  fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=17846272, buflen=4096
00:36:50.459  fio: pid=3320466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:36:50.459   00:18:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:36:50.459   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:50.459   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:36:50.459  fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=32575488, buflen=4096
00:36:50.459  fio: pid=3320464, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:36:50.459   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:50.459   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:36:50.459  fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=847872, buflen=4096
00:36:50.459  fio: pid=3320446, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:36:50.717  fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3747840, buflen=4096
00:36:50.717  fio: pid=3320462, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:36:50.717   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:50.717   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:36:50.717  
00:36:50.717  job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3320446: Tue Dec 10 00:18:06 2024
00:36:50.717    read: IOPS=66, BW=264KiB/s (271kB/s)(828KiB/3132msec)
00:36:50.717      slat (usec): min=6, max=17815, avg=99.00, stdev=1234.41
00:36:50.717      clat (usec): min=262, max=41926, avg=14925.78, stdev=19492.56
00:36:50.717       lat (usec): min=271, max=58919, avg=15025.16, stdev=19653.86
00:36:50.717      clat percentiles (usec):
00:36:50.717       |  1.00th=[  281],  5.00th=[  359], 10.00th=[  388], 20.00th=[  404],
00:36:50.717       | 30.00th=[  429], 40.00th=[  441], 50.00th=[  453], 60.00th=[  465],
00:36:50.717       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:36:50.717       | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:36:50.717       | 99.99th=[41681]
00:36:50.717     bw (  KiB/s): min=   94, max= 1144, per=1.68%, avg=271.67, stdev=427.37, samples=6
00:36:50.717     iops        : min=   23, max=  286, avg=67.83, stdev=106.88, samples=6
00:36:50.717    lat (usec)   : 500=63.46%, 750=0.48%
00:36:50.717    lat (msec)   : 50=35.58%
00:36:50.717    cpu          : usr=0.00%, sys=0.22%, ctx=210, majf=0, minf=1
00:36:50.717    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:50.717       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       complete  : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       issued rwts: total=208,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:50.717       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:50.717  job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3320462: Tue Dec 10 00:18:06 2024
00:36:50.717    read: IOPS=275, BW=1100KiB/s (1126kB/s)(3660KiB/3328msec)
00:36:50.717      slat (usec): min=6, max=11822, avg=32.49, stdev=505.77
00:36:50.717      clat (usec): min=183, max=45020, avg=3578.69, stdev=11209.29
00:36:50.717       lat (usec): min=191, max=51015, avg=3611.21, stdev=11255.41
00:36:50.717      clat percentiles (usec):
00:36:50.717       |  1.00th=[  188],  5.00th=[  190], 10.00th=[  194], 20.00th=[  204],
00:36:50.717       | 30.00th=[  212], 40.00th=[  219], 50.00th=[  227], 60.00th=[  245],
00:36:50.717       | 70.00th=[  253], 80.00th=[  262], 90.00th=[  379], 95.00th=[41157],
00:36:50.717       | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827],
00:36:50.717       | 99.99th=[44827]
00:36:50.717     bw (  KiB/s): min=   94, max= 3488, per=4.10%, avg=662.33, stdev=1384.29, samples=6
00:36:50.717     iops        : min=   23, max=  872, avg=165.50, stdev=346.11, samples=6
00:36:50.717    lat (usec)   : 250=64.30%, 500=27.40%
00:36:50.717    lat (msec)   : 50=8.19%
00:36:50.717    cpu          : usr=0.27%, sys=0.36%, ctx=922, majf=0, minf=2
00:36:50.717    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:50.717       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       complete  : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       issued rwts: total=916,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:50.717       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:50.717  job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3320464: Tue Dec 10 00:18:06 2024
00:36:50.717    read: IOPS=2721, BW=10.6MiB/s (11.1MB/s)(31.1MiB/2923msec)
00:36:50.717      slat (nsec): min=6877, max=59720, avg=8251.09, stdev=1912.32
00:36:50.717      clat (usec): min=170, max=41509, avg=354.46, stdev=2204.43
00:36:50.717       lat (usec): min=185, max=41518, avg=362.71, stdev=2204.69
00:36:50.717      clat percentiles (usec):
00:36:50.717       |  1.00th=[  196],  5.00th=[  204], 10.00th=[  208], 20.00th=[  212],
00:36:50.717       | 30.00th=[  217], 40.00th=[  221], 50.00th=[  227], 60.00th=[  231],
00:36:50.717       | 70.00th=[  239], 80.00th=[  247], 90.00th=[  255], 95.00th=[  273],
00:36:50.717       | 99.00th=[  441], 99.50th=[  486], 99.90th=[41157], 99.95th=[41157],
00:36:50.717       | 99.99th=[41681]
00:36:50.717     bw (  KiB/s): min= 1008, max=17704, per=72.60%, avg=11720.00, stdev=7059.92, samples=5
00:36:50.717     iops        : min=  252, max= 4426, avg=2930.00, stdev=1764.98, samples=5
00:36:50.717    lat (usec)   : 250=85.54%, 500=14.02%, 750=0.10%, 1000=0.01%
00:36:50.717    lat (msec)   : 2=0.01%, 50=0.30%
00:36:50.717    cpu          : usr=1.51%, sys=4.45%, ctx=7956, majf=0, minf=2
00:36:50.717    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:50.717       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       issued rwts: total=7954,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:50.717       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:50.717  job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3320466: Tue Dec 10 00:18:06 2024
00:36:50.717    read: IOPS=1623, BW=6491KiB/s (6647kB/s)(17.0MiB/2685msec)
00:36:50.717      slat (nsec): min=6995, max=38434, avg=8295.63, stdev=1733.27
00:36:50.717      clat (usec): min=177, max=41210, avg=599.67, stdev=3812.70
00:36:50.717       lat (usec): min=189, max=41221, avg=607.97, stdev=3813.39
00:36:50.717      clat percentiles (usec):
00:36:50.717       |  1.00th=[  204],  5.00th=[  217], 10.00th=[  221], 20.00th=[  225],
00:36:50.717       | 30.00th=[  227], 40.00th=[  229], 50.00th=[  231], 60.00th=[  233],
00:36:50.717       | 70.00th=[  237], 80.00th=[  243], 90.00th=[  255], 95.00th=[  293],
00:36:50.717       | 99.00th=[  529], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:36:50.717       | 99.99th=[41157]
00:36:50.717     bw (  KiB/s): min=  440, max=15320, per=38.45%, avg=6208.00, stdev=7630.60, samples=5
00:36:50.717     iops        : min=  110, max= 3830, avg=1552.00, stdev=1907.65, samples=5
00:36:50.717    lat (usec)   : 250=87.38%, 500=11.56%, 750=0.14%
00:36:50.717    lat (msec)   : 50=0.89%
00:36:50.717    cpu          : usr=0.97%, sys=2.57%, ctx=4358, majf=0, minf=1
00:36:50.717    IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:36:50.717       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       complete  : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:36:50.717       issued rwts: total=4358,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:36:50.717       latency   : target=0, window=0, percentile=100.00%, depth=1
00:36:50.717  
00:36:50.717  Run status group 0 (all jobs):
00:36:50.717     READ: bw=15.8MiB/s (16.5MB/s), 264KiB/s-10.6MiB/s (271kB/s-11.1MB/s), io=52.5MiB (55.0MB), run=2685-3328msec
00:36:50.717  
00:36:50.717  Disk stats (read/write):
00:36:50.717    nvme0n1: ios=205/0, merge=0/0, ticks=3008/0, in_queue=3008, util=93.68%
00:36:50.717    nvme0n2: ios=944/0, merge=0/0, ticks=4019/0, in_queue=4019, util=99.14%
00:36:50.717    nvme0n3: ios=7945/0, merge=0/0, ticks=2587/0, in_queue=2587, util=96.16%
00:36:50.717    nvme0n4: ios=4082/0, merge=0/0, ticks=2465/0, in_queue=2465, util=96.38%
00:36:50.975   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:50.975   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:36:51.232   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:51.232   00:18:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:36:51.489   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:51.489   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:36:51.746   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:36:51.746   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3320138
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:36:52.003  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:36:52.003  nvmf hotplug test: fio failed as expected
00:36:52.003   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:52.261   00:18:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:52.261  rmmod nvme_tcp
00:36:52.261  rmmod nvme_fabrics
00:36:52.261  rmmod nvme_keyring
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3317395 ']'
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3317395
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3317395 ']'
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3317395
00:36:52.261    00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:52.261    00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3317395
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3317395'
00:36:52.261  killing process with pid 3317395
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3317395
00:36:52.261   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3317395
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:52.520   00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:52.520    00:18:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:55.064  
00:36:55.064  real	0m26.492s
00:36:55.064  user	1m31.612s
00:36:55.064  sys	0m10.860s
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:36:55.064  ************************************
00:36:55.064  END TEST nvmf_fio_target
00:36:55.064  ************************************
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:36:55.064  ************************************
00:36:55.064  START TEST nvmf_bdevio
00:36:55.064  ************************************
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:36:55.064  * Looking for test storage...
00:36:55.064  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:55.064  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:55.064  		--rc genhtml_branch_coverage=1
00:36:55.064  		--rc genhtml_function_coverage=1
00:36:55.064  		--rc genhtml_legend=1
00:36:55.064  		--rc geninfo_all_blocks=1
00:36:55.064  		--rc geninfo_unexecuted_blocks=1
00:36:55.064  		
00:36:55.064  		'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:36:55.064  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:55.064  		--rc genhtml_branch_coverage=1
00:36:55.064  		--rc genhtml_function_coverage=1
00:36:55.064  		--rc genhtml_legend=1
00:36:55.064  		--rc geninfo_all_blocks=1
00:36:55.064  		--rc geninfo_unexecuted_blocks=1
00:36:55.064  		
00:36:55.064  		'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:36:55.064  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:55.064  		--rc genhtml_branch_coverage=1
00:36:55.064  		--rc genhtml_function_coverage=1
00:36:55.064  		--rc genhtml_legend=1
00:36:55.064  		--rc geninfo_all_blocks=1
00:36:55.064  		--rc geninfo_unexecuted_blocks=1
00:36:55.064  		
00:36:55.064  		'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:36:55.064  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:55.064  		--rc genhtml_branch_coverage=1
00:36:55.064  		--rc genhtml_function_coverage=1
00:36:55.064  		--rc genhtml_legend=1
00:36:55.064  		--rc geninfo_all_blocks=1
00:36:55.064  		--rc geninfo_unexecuted_blocks=1
00:36:55.064  		
00:36:55.064  		'
00:36:55.064   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:36:55.064     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:36:55.064    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:36:55.065     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:36:55.065     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:36:55.065     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:36:55.065     00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:36:55.065      00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.065      00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.065      00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.065      00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:36:55.065      00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:55.065    00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:36:55.065   00:18:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:00.351  Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:00.351  Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:00.351  Found net devices under 0000:af:00.0: cvl_0_0
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:00.351  Found net devices under 0000:af:00.1: cvl_0_1
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:00.351   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:00.352   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:00.610  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:00.610  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms
00:37:00.610  
00:37:00.610  --- 10.0.0.2 ping statistics ---
00:37:00.610  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:00.610  rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:00.610  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:00.610  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:37:00.610  
00:37:00.610  --- 10.0.0.1 ping statistics ---
00:37:00.610  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:00.610  rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:00.610   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:00.611   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:00.611   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:00.611   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:00.611   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3325020
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3325020
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3325020 ']'
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:00.869  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:00.869   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:00.869  [2024-12-10 00:18:16.544589] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:00.869  [2024-12-10 00:18:16.545561] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:37:00.869  [2024-12-10 00:18:16.545600] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:00.869  [2024-12-10 00:18:16.625093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:00.869  [2024-12-10 00:18:16.666819] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:00.869  [2024-12-10 00:18:16.666855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:00.869  [2024-12-10 00:18:16.666862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:00.869  [2024-12-10 00:18:16.666868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:00.869  [2024-12-10 00:18:16.666873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:00.869  [2024-12-10 00:18:16.668382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:37:00.869  [2024-12-10 00:18:16.668493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:37:00.870  [2024-12-10 00:18:16.668600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:37:00.870  [2024-12-10 00:18:16.668601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:37:01.128  [2024-12-10 00:18:16.736772] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:01.128  [2024-12-10 00:18:16.737573] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:37:01.128  [2024-12-10 00:18:16.737784] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:37:01.128  [2024-12-10 00:18:16.738205] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:37:01.128  [2024-12-10 00:18:16.738239] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128  [2024-12-10 00:18:16.805369] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128  Malloc0
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:01.128  [2024-12-10 00:18:16.889566] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:01.128   00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=()
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:01.129  {
00:37:01.129    "params": {
00:37:01.129      "name": "Nvme$subsystem",
00:37:01.129      "trtype": "$TEST_TRANSPORT",
00:37:01.129      "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:01.129      "adrfam": "ipv4",
00:37:01.129      "trsvcid": "$NVMF_PORT",
00:37:01.129      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:01.129      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:01.129      "hdgst": ${hdgst:-false},
00:37:01.129      "ddgst": ${ddgst:-false}
00:37:01.129    },
00:37:01.129    "method": "bdev_nvme_attach_controller"
00:37:01.129  }
00:37:01.129  EOF
00:37:01.129  )")
00:37:01.129     00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat
00:37:01.129    00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq .
00:37:01.129     00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=,
00:37:01.129     00:18:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:37:01.129    "params": {
00:37:01.129      "name": "Nvme1",
00:37:01.129      "trtype": "tcp",
00:37:01.129      "traddr": "10.0.0.2",
00:37:01.129      "adrfam": "ipv4",
00:37:01.129      "trsvcid": "4420",
00:37:01.129      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:37:01.129      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:37:01.129      "hdgst": false,
00:37:01.129      "ddgst": false
00:37:01.129    },
00:37:01.129    "method": "bdev_nvme_attach_controller"
00:37:01.129  }'
00:37:01.129  [2024-12-10 00:18:16.942254] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:37:01.129  [2024-12-10 00:18:16.942300] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3325048 ]
00:37:01.387  [2024-12-10 00:18:17.018443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:37:01.387  [2024-12-10 00:18:17.060539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:01.387  [2024-12-10 00:18:17.060648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:01.387  [2024-12-10 00:18:17.060649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:37:01.387  I/O targets:
00:37:01.387    Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:37:01.387  
00:37:01.387  
00:37:01.387       CUnit - A unit testing framework for C - Version 2.1-3
00:37:01.387       http://cunit.sourceforge.net/
00:37:01.387  
00:37:01.387  
00:37:01.387  Suite: bdevio tests on: Nvme1n1
00:37:01.651    Test: blockdev write read block ...passed
00:37:01.651    Test: blockdev write zeroes read block ...passed
00:37:01.651    Test: blockdev write zeroes read no split ...passed
00:37:01.651    Test: blockdev write zeroes read split ...passed
00:37:01.651    Test: blockdev write zeroes read split partial ...passed
00:37:01.651    Test: blockdev reset ...[2024-12-10 00:18:17.359259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:37:01.651  [2024-12-10 00:18:17.359318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecb610 (9): Bad file descriptor
00:37:01.651  [2024-12-10 00:18:17.402885] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:37:01.651  passed
00:37:01.651    Test: blockdev write read 8 blocks ...passed
00:37:01.651    Test: blockdev write read size > 128k ...passed
00:37:01.651    Test: blockdev write read invalid size ...passed
00:37:01.651    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:37:01.651    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:37:01.651    Test: blockdev write read max offset ...passed
00:37:01.912    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:37:01.912    Test: blockdev writev readv 8 blocks ...passed
00:37:01.912    Test: blockdev writev readv 30 x 1block ...passed
00:37:01.912    Test: blockdev writev readv block ...passed
00:37:01.912    Test: blockdev writev readv size > 128k ...passed
00:37:01.912    Test: blockdev writev readv size > 128k in two iovs ...passed
00:37:01.912    Test: blockdev comparev and writev ...[2024-12-10 00:18:17.694061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.694102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.694399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.694422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.694717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.694738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.694745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.695024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.695035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:37:01.912  [2024-12-10 00:18:17.695047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:37:01.912  [2024-12-10 00:18:17.695053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:37:01.912  passed
00:37:02.171    Test: blockdev nvme passthru rw ...passed
00:37:02.171    Test: blockdev nvme passthru vendor specific ...[2024-12-10 00:18:17.777569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:37:02.171  [2024-12-10 00:18:17.777588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:37:02.171  [2024-12-10 00:18:17.777700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:37:02.171  [2024-12-10 00:18:17.777710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:37:02.171  [2024-12-10 00:18:17.777815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:37:02.171  [2024-12-10 00:18:17.777825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:37:02.171  [2024-12-10 00:18:17.777937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:37:02.171  [2024-12-10 00:18:17.777947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:37:02.171  passed
00:37:02.171    Test: blockdev nvme admin passthru ...passed
00:37:02.171    Test: blockdev copy ...passed
00:37:02.171  
00:37:02.171  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:37:02.171                suites      1      1    n/a      0        0
00:37:02.171                 tests     23     23     23      0        0
00:37:02.171               asserts    152    152    152      0      n/a
00:37:02.171  
00:37:02.171  Elapsed time =    1.247 seconds
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:02.171   00:18:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:02.171  rmmod nvme_tcp
00:37:02.171  rmmod nvme_fabrics
00:37:02.171  rmmod nvme_keyring
00:37:02.171   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:02.171   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e
00:37:02.172   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0
00:37:02.172   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3325020 ']'
00:37:02.172   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3325020
00:37:02.172   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3325020 ']'
00:37:02.172   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3325020
00:37:02.431    00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:02.431    00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3325020
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']'
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3325020'
00:37:02.431  killing process with pid 3325020
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3325020
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3325020
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:02.431   00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:02.431    00:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:04.968  
00:37:04.968  real	0m9.945s
00:37:04.968  user	0m8.785s
00:37:04.968  sys	0m5.158s
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:37:04.968  ************************************
00:37:04.968  END TEST nvmf_bdevio
00:37:04.968  ************************************
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:37:04.968  
00:37:04.968  real	4m32.987s
00:37:04.968  user	9m4.921s
00:37:04.968  sys	1m50.350s
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:04.968   00:18:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:04.968  ************************************
00:37:04.968  END TEST nvmf_target_core_interrupt_mode
00:37:04.968  ************************************
00:37:04.968   00:18:20 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:37:04.968   00:18:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:04.968   00:18:20 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:04.968   00:18:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:04.968  ************************************
00:37:04.968  START TEST nvmf_interrupt
00:37:04.968  ************************************
00:37:04.968   00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode
00:37:04.968  * Looking for test storage...
00:37:04.968  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:04.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:04.968  		--rc genhtml_branch_coverage=1
00:37:04.968  		--rc genhtml_function_coverage=1
00:37:04.968  		--rc genhtml_legend=1
00:37:04.968  		--rc geninfo_all_blocks=1
00:37:04.968  		--rc geninfo_unexecuted_blocks=1
00:37:04.968  		
00:37:04.968  		'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:04.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:04.968  		--rc genhtml_branch_coverage=1
00:37:04.968  		--rc genhtml_function_coverage=1
00:37:04.968  		--rc genhtml_legend=1
00:37:04.968  		--rc geninfo_all_blocks=1
00:37:04.968  		--rc geninfo_unexecuted_blocks=1
00:37:04.968  		
00:37:04.968  		'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:37:04.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:04.968  		--rc genhtml_branch_coverage=1
00:37:04.968  		--rc genhtml_function_coverage=1
00:37:04.968  		--rc genhtml_legend=1
00:37:04.968  		--rc geninfo_all_blocks=1
00:37:04.968  		--rc geninfo_unexecuted_blocks=1
00:37:04.968  		
00:37:04.968  		'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:37:04.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:04.968  		--rc genhtml_branch_coverage=1
00:37:04.968  		--rc genhtml_function_coverage=1
00:37:04.968  		--rc genhtml_legend=1
00:37:04.968  		--rc geninfo_all_blocks=1
00:37:04.968  		--rc geninfo_unexecuted_blocks=1
00:37:04.968  		
00:37:04.968  		'
00:37:04.968   00:18:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:04.968     00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:04.968    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:04.969     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:37:04.969     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:04.969     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:04.969     00:18:20 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:04.969      00:18:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:04.969      00:18:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:04.969      00:18:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:04.969      00:18:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH
00:37:04.969      00:18:20 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:04.969    00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable
00:37:04.969   00:18:20 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=()
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:11.534  Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:11.534  Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:11.534   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:11.535  Found net devices under 0000:af:00.0: cvl_0_0
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:11.535  Found net devices under 0000:af:00.1: cvl_0_1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:11.535  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:11.535  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms
00:37:11.535  
00:37:11.535  --- 10.0.0.2 ping statistics ---
00:37:11.535  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:11.535  rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:11.535  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:11.535  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:37:11.535  
00:37:11.535  --- 10.0.0.1 ping statistics ---
00:37:11.535  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:11.535  rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
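The `nvmf_tcp_init` trace above follows a fixed sequence: move the target port into a network namespace, address both ends of the link, open the NVMe/TCP port in the firewall, then verify connectivity with ping in both directions. A minimal sketch of that same sequence, with the commands echoed rather than executed (running them needs root and the `cvl_0_*` interfaces from this particular test rig):

```shell
#!/bin/sh
# Dry-run sketch of the tcp init sequence traced above. Interface and
# namespace names match this log; run() only prints each command (and
# records it in $CMDS) -- swap the body for "$@" to actually execute.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
CMDS=""
run() { CMDS="$CMDS$*\n"; echo "+ $*"; }

run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The target then runs inside the namespace (`ip netns exec cvl_0_0_ns_spdk ...`), which is why `NVMF_APP` is prefixed with `NVMF_TARGET_NS_CMD` a few lines below.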
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3328750
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3328750
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3328750 ']'
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:11.535  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:11.535   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.535  [2024-12-10 00:18:26.606553] thread.c:3083:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:11.535  [2024-12-10 00:18:26.607440] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:37:11.535  [2024-12-10 00:18:26.607472] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:11.535  [2024-12-10 00:18:26.671911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:11.535  [2024-12-10 00:18:26.716239] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:11.535  [2024-12-10 00:18:26.716269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:11.535  [2024-12-10 00:18:26.716276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:11.535  [2024-12-10 00:18:26.716282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:11.535  [2024-12-10 00:18:26.716287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:11.535  [2024-12-10 00:18:26.717358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:11.535  [2024-12-10 00:18:26.717361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:11.535  [2024-12-10 00:18:26.784305] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:11.535  [2024-12-10 00:18:26.784854] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:37:11.535  [2024-12-10 00:18:26.785051] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:37:11.536    00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:37:11.536  5000+0 records in
00:37:11.536  5000+0 records out
00:37:11.536  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0175289 s, 584 MB/s
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536  AIO0
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
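The `setup_bdev_aio` step above sizes the backing file as `bs * count` bytes; dd's "10 MB, 9.8 MiB" summary is just that product expressed in SI versus binary units. A quick check of the arithmetic:

```shell
# aiofile size: 2048-byte blocks x 5000 blocks.
# dd reports both decimal (MB = 1e6) and binary (MiB = 2^20) units, rounded.
awk 'BEGIN {
    bytes = 2048 * 5000
    printf "%d bytes = %.2f MB = %.2f MiB\n", bytes, bytes/1e6, bytes/(1024*1024)
}'
```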
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536  [2024-12-10 00:18:26.906114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:11.536  [2024-12-10 00:18:26.946436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3328750 0
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 0 idle
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:11.536   00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:11.536    00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:11.536    00:18:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328750 root      20   0  128.2g  46848  33792 S   6.7   0.0   0:00.23 reactor_0'
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328750 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:00.23 reactor_0
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3328750 1
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 1 idle
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328754 root      20   0  128.2g  46848  33792 S   0.0   0.0   0:00.00 reactor_1'
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328754 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:00.00 reactor_1
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
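The busy/idle probe repeated above reduces to: take one batch-mode `top` sample for the target pid, grep the reactor thread, pull the %CPU column, truncate it to an integer, and compare against a threshold. A self-contained sketch of just that parsing step, fed a canned `top -bH` line instead of a live process:

```shell
# Parse %CPU from a canned `top -bHn 1` line, as interrupt/common.sh does.
# Field 9 of top's thread view is %CPU; ${var%.*} truncates "6.7" -> "6".
top_reactor='3328754 root      20   0  128.2g  46848  33792 S   0.0   0.0   0:00.00 reactor_1'
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
cpu_rate=${cpu_rate%.*}
idle_threshold=30
if [ "${cpu_rate:-0}" -le "$idle_threshold" ]; then
    echo "reactor idle (cpu=${cpu_rate}%)"
fi
```

With interrupt mode working, an unloaded reactor thread samples near 0% here; the later "busy" checks reuse the same parse with the comparison inverted against `busy_threshold`.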
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3328976
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3328750 0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3328750 0 busy
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:11.536   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:11.536    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328750 root      20   0  128.2g  47616  34560 S   0.0   0.0   0:00.23 reactor_0'
00:37:11.795    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328750 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:00.23 reactor_0
00:37:11.795    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:11.795    00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:11.795   00:18:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1
00:37:12.732   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- ))
00:37:12.732   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:12.732    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:12.732    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328750 root      20   0  128.2g  47616  34560 R  99.9   0.0   0:02.53 reactor_0'
00:37:12.994    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328750 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.53 reactor_0
00:37:12.994    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:12.994    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3328750 1
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3328750 1 busy
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:12.994   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:12.994    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:12.994    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328754 root      20   0  128.2g  47616  34560 R  99.9   0.0   0:01.34 reactor_1'
00:37:13.253    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328754 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.34 reactor_1
00:37:13.253    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:13.253    00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:13.253   00:18:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3328976
00:37:23.219  Initializing NVMe Controllers
00:37:23.219  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:23.219  Controller IO queue size 256, less than required.
00:37:23.219  Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:23.219  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:23.219  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:23.219  Initialization complete. Launching workers.
00:37:23.219  ========================================================
00:37:23.219                                                                                                               Latency(us)
00:37:23.219  Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:23.219  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:   16380.30      63.99   15636.72    3404.04   29337.72
00:37:23.219  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:   16514.90      64.51   15505.13    7831.48   25136.03
00:37:23.219  ========================================================
00:37:23.219  Total                                                                    :   32895.20     128.50   15570.65    3404.04   29337.72
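The Total row in the perf table relates to the per-core rows by simple addition: IOPS and MiB/s sum, and the average latency is approximately the IOPS-weighted mean of the per-core averages (small discrepancies come from the rounding already present in the printed rows). Checking the arithmetic with awk:

```shell
# Sanity-check the Total row against the two per-core rows above.
awk 'BEGIN {
    iops = 16380.30 + 16514.90                              # -> 32895.20
    mibs = 63.99 + 64.51                                    # -> 128.50
    avg  = (16380.30*15636.72 + 16514.90*15505.13) / iops   # ~15570 us
    printf "IOPS=%.2f MiB/s=%.2f avg_us=%.2f\n", iops, mibs, avg
}'
```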
00:37:23.219  
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3328750 0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 0 idle
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328750 root      20   0  128.2g  47616  34560 S   0.0   0.0   0:20.23 reactor_0'
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328750 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:20.23 reactor_0
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3328750 1
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 1 idle
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328754 root      20   0  128.2g  47616  34560 S   0.0   0.0   0:10.00 reactor_1'
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328754 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:23.219    00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:23.219   00:18:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:37:23.219   00:18:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:37:23.219   00:18:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:37:23.219   00:18:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:37:23.219   00:18:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:37:23.219   00:18:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:37:24.597    00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:37:24.597    00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3328750 0
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 0 idle
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:24.597   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:24.597    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:24.597    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328750 root      20   0  128.2g  73728  34560 S   0.0   0.1   0:20.46 reactor_0'
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328750 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:20.46 reactor_0
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3328750 1
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3328750 1 idle
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3328750
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3328750 -w 256
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3328754 root      20   0  128.2g  73728  34560 S   0.0   0.1   0:10.10 reactor_1'
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3328754 root 20 0 128.2g 73728 34560 S 0.0 0.1 0:10.10 reactor_1
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:37:24.855    00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:37:24.855   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:37:25.114  NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:25.114   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:25.114  rmmod nvme_tcp
00:37:25.114  rmmod nvme_fabrics
00:37:25.114  rmmod nvme_keyring
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 3328750 ']'
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3328750
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3328750 ']'
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3328750
00:37:25.372    00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:37:25.372   00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:25.372    00:18:40 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3328750
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3328750'
00:37:25.372  killing process with pid 3328750
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3328750
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3328750
00:37:25.372   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:25.373   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore
00:37:25.630   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:25.630   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:25.630   00:18:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:25.630   00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:37:25.630    00:18:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:27.558   00:18:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:27.558  
00:37:27.558  real	0m22.846s
00:37:27.558  user	0m39.822s
00:37:27.558  sys	0m8.268s
00:37:27.558   00:18:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:27.558   00:18:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:37:27.558  ************************************
00:37:27.558  END TEST nvmf_interrupt
00:37:27.558  ************************************
00:37:27.558  
00:37:27.558  real	27m26.602s
00:37:27.558  user	56m42.455s
00:37:27.558  sys	9m18.962s
00:37:27.558   00:18:43 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:27.558   00:18:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:27.558  ************************************
00:37:27.558  END TEST nvmf_tcp
00:37:27.558  ************************************
00:37:27.558   00:18:43  -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]]
00:37:27.558   00:18:43  -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:27.558   00:18:43  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:27.558   00:18:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:27.558   00:18:43  -- common/autotest_common.sh@10 -- # set +x
00:37:27.558  ************************************
00:37:27.558  START TEST spdkcli_nvmf_tcp
00:37:27.558  ************************************
00:37:27.558   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:37:27.817  * Looking for test storage...
00:37:27.817  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:27.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:27.817  		--rc genhtml_branch_coverage=1
00:37:27.817  		--rc genhtml_function_coverage=1
00:37:27.817  		--rc genhtml_legend=1
00:37:27.817  		--rc geninfo_all_blocks=1
00:37:27.817  		--rc geninfo_unexecuted_blocks=1
00:37:27.817  		
00:37:27.817  		'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:27.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:27.817  		--rc genhtml_branch_coverage=1
00:37:27.817  		--rc genhtml_function_coverage=1
00:37:27.817  		--rc genhtml_legend=1
00:37:27.817  		--rc geninfo_all_blocks=1
00:37:27.817  		--rc geninfo_unexecuted_blocks=1
00:37:27.817  		
00:37:27.817  		'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:37:27.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:27.817  		--rc genhtml_branch_coverage=1
00:37:27.817  		--rc genhtml_function_coverage=1
00:37:27.817  		--rc genhtml_legend=1
00:37:27.817  		--rc geninfo_all_blocks=1
00:37:27.817  		--rc geninfo_unexecuted_blocks=1
00:37:27.817  		
00:37:27.817  		'
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:37:27.817  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:27.817  		--rc genhtml_branch_coverage=1
00:37:27.817  		--rc genhtml_function_coverage=1
00:37:27.817  		--rc genhtml_legend=1
00:37:27.817  		--rc geninfo_all_blocks=1
00:37:27.817  		--rc geninfo_unexecuted_blocks=1
00:37:27.817  		
00:37:27.817  		'
00:37:27.817   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:37:27.817   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:27.817     00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:27.817    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:27.818     00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:27.818     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob
00:37:27.818     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:27.818     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:27.818     00:18:43 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:27.818      00:18:43 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:27.818      00:18:43 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:27.818      00:18:43 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:27.818      00:18:43 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:37:27.818      00:18:43 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:27.818  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:27.818    00:18:43 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3331626
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3331626
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3331626 ']'
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:27.818  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:27.818   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:27.818  [2024-12-10 00:18:43.655770] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:37:27.818  [2024-12-10 00:18:43.655817] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3331626 ]
00:37:28.077  [2024-12-10 00:18:43.728621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:28.077  [2024-12-10 00:18:43.767823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:28.077  [2024-12-10 00:18:43.767825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:28.077   00:18:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:37:28.077  '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:37:28.077  '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:37:28.077  '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:37:28.077  '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:37:28.077  '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:37:28.077  '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:37:28.077  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:37:28.077  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:37:28.077  '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:37:28.077  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:37:28.078  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:37:28.078  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:37:28.078  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:37:28.078  '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:37:28.078  '
00:37:31.376  [2024-12-10 00:18:46.598908] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:32.324  [2024-12-10 00:18:47.931355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:37:34.856  [2024-12-10 00:18:50.415110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:37:36.758  [2024-12-10 00:18:52.565827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:37:38.661  Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:37:38.661  Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:37:38.661  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW  max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:37:38.661  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:37:38.661  Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create  tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create  nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:37:38.661  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:37:38.661  Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match
00:37:38.661   00:18:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:37:38.920   00:18:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:39.179   00:18:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:37:39.179  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:37:39.179  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:37:39.179  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:37:39.179  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:37:39.179  '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:37:39.179  '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:37:39.179  '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:37:39.179  '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:37:39.179  '
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:37:45.744  Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:37:45.744  Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:37:45.744  Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:37:45.744  Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3331626
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3331626 ']'
00:37:45.744   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3331626
00:37:45.745    00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:45.745    00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3331626
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3331626'
00:37:45.745  killing process with pid 3331626
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3331626
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3331626
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3331626 ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3331626
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3331626 ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3331626
00:37:45.745  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3331626) - No such process
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3331626 is not found'
00:37:45.745  Process with pid 3331626 is not found
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']'
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio
00:37:45.745  
00:37:45.745  real	0m17.329s
00:37:45.745  user	0m38.169s
00:37:45.745  sys	0m0.807s
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:45.745   00:19:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:37:45.745  ************************************
00:37:45.745  END TEST spdkcli_nvmf_tcp
00:37:45.745  ************************************
00:37:45.745   00:19:00  -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:37:45.745   00:19:00  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:37:45.745   00:19:00  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:45.745   00:19:00  -- common/autotest_common.sh@10 -- # set +x
00:37:45.745  ************************************
00:37:45.745  START TEST nvmf_identify_passthru
00:37:45.745  ************************************
00:37:45.745   00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp
00:37:45.745  * Looking for test storage...
00:37:45.745  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:37:45.745     00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version
00:37:45.745     00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-:
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-:
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<'
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:45.745     00:19:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:45.745    00:19:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:37:45.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:45.745  		--rc genhtml_branch_coverage=1
00:37:45.745  		--rc genhtml_function_coverage=1
00:37:45.745  		--rc genhtml_legend=1
00:37:45.745  		--rc geninfo_all_blocks=1
00:37:45.745  		--rc geninfo_unexecuted_blocks=1
00:37:45.745  		
00:37:45.745  		'
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:37:45.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:45.745  		--rc genhtml_branch_coverage=1
00:37:45.745  		--rc genhtml_function_coverage=1
00:37:45.745  		--rc genhtml_legend=1
00:37:45.745  		--rc geninfo_all_blocks=1
00:37:45.745  		--rc geninfo_unexecuted_blocks=1
00:37:45.745  		
00:37:45.745  		'
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:37:45.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:45.745  		--rc genhtml_branch_coverage=1
00:37:45.745  		--rc genhtml_function_coverage=1
00:37:45.745  		--rc genhtml_legend=1
00:37:45.745  		--rc geninfo_all_blocks=1
00:37:45.745  		--rc geninfo_unexecuted_blocks=1
00:37:45.745  		
00:37:45.745  		'
00:37:45.745    00:19:00 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:37:45.745  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:45.745  		--rc genhtml_branch_coverage=1
00:37:45.745  		--rc genhtml_function_coverage=1
00:37:45.745  		--rc genhtml_legend=1
00:37:45.745  		--rc geninfo_all_blocks=1
00:37:45.745  		--rc geninfo_unexecuted_blocks=1
00:37:45.745  		
00:37:45.745  		'
00:37:45.745   00:19:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:45.745     00:19:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:45.745     00:19:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:45.745    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:45.746     00:19:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:37:45.746     00:19:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:45.746     00:19:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:45.746     00:19:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:45.746      00:19:00 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746      00:19:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746      00:19:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746      00:19:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:37:45.746      00:19:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:37:45.746  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:45.746    00:19:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:45.746   00:19:01 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:45.746    00:19:01 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob
00:37:45.746    00:19:01 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:45.746    00:19:01 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:45.746    00:19:01 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:45.746     00:19:01 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746     00:19:01 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746     00:19:01 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746     00:19:01 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH
00:37:45.746     00:19:01 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:45.746   00:19:01 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:45.746   00:19:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:37:45.746    00:19:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:45.746   00:19:01 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable
00:37:45.746   00:19:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=()
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:37:51.029  Found 0000:af:00.0 (0x8086 - 0x159b)
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:37:51.029  Found 0000:af:00.1 (0x8086 - 0x159b)
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:37:51.029  Found net devices under 0000:af:00.0: cvl_0_0
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:37:51.029  Found net devices under 0000:af:00.1: cvl_0_1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:37:51.029   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:37:51.030  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:51.030  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms
00:37:51.030  
00:37:51.030  --- 10.0.0.2 ping statistics ---
00:37:51.030  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:51.030  rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:51.030  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:51.030  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms
00:37:51.030  
00:37:51.030  --- 10.0.0.1 ping statistics ---
00:37:51.030  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:51.030  rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms
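The `nvmf_tcp_init` sequence above moves the target port into its own network namespace, assigns 10.0.0.1/10.0.0.2, brings the links up, and pings in both directions. A sketch of that plumbing with a `run` wrapper (my addition, not in nvmf/common.sh) so the sequence can be previewed without root by setting `DRY_RUN`:

```shell
#!/usr/bin/env bash
set -eu

# Echo each command; execute it only when DRY_RUN is unset.
run() { echo "+ $*"; [ -n "${DRY_RUN:-}" ] || "$@"; }

# Mirrors the namespace setup traced above: target interface goes into
# the namespace with 10.0.0.2, initiator keeps 10.0.0.1 on the host,
# then a ping verifies connectivity across the wire.
setup_ns() {
  local ns=$1 target_if=$2 initiator_if=$3
  run ip netns add "$ns"
  run ip link set "$target_if" netns "$ns"
  run ip addr add 10.0.0.1/24 dev "$initiator_if"
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  run ip link set "$initiator_if" up
  run ip netns exec "$ns" ip link set "$target_if" up
  run ip netns exec "$ns" ip link set lo up
  run ping -c 1 10.0.0.2
}

# Dry-run with the interface/namespace names from this log.
DRY_RUN=1 setup_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Running without `DRY_RUN=1` would require root and real interfaces, which is exactly what the CI job has here.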
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:51.030   00:19:06 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:37:51.030   00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:37:51.030   00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:51.030   00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:51.288    00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:37:51.288    00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=()
00:37:51.288    00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs
00:37:51.288    00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:37:51.288     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:37:51.288     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=()
00:37:51.288     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs
00:37:51.288     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:37:51.289      00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:37:51.289      00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:37:51.289     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:37:51.289     00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:37:51.289    00:19:06 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0
00:37:51.289   00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0
00:37:51.289   00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']'
00:37:51.289    00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:37:51.289    00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:37:51.289    00:19:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:37:55.596   00:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN
00:37:55.596    00:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0
00:37:55.596    00:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:37:55.596    00:19:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
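The serial/model extraction above is a plain `grep | awk` pipeline over `spdk_nvme_identify` output. Sketched here against a canned fragment instead of real hardware; note that `awk '{print $3}'` keeps only the first word of the model string, which is why the log records `nvme_model_number=INTEL` rather than the full model name:

```shell
#!/usr/bin/env bash
set -eu

# Canned fragment standing in for real `spdk_nvme_identify` output
# (serial from this log; the full model string is illustrative).
identify_output='Serial Number:                         BTLJ7244049A1P0FGN
Model Number:                          INTEL SSDPE2KE016T8'

# Same pipeline as identify_passthru.sh lines 23-24 above.
serial=$(printf '%s\n' "$identify_output" | grep 'Serial Number:' | awk '{print $3}')
model=$(printf '%s\n' "$identify_output" | grep 'Model Number:' | awk '{print $3}')

echo "$serial"   # the serial number, field 3 of the matched line
echo "$model"    # only "INTEL": awk $3 drops the rest of the model string
```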
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3338742
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:37:59.835   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3338742
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3338742 ']'
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:37:59.835  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:59.835   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.835  [2024-12-10 00:19:15.370287] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:37:59.835  [2024-12-10 00:19:15.370336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:59.835  [2024-12-10 00:19:15.447126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:37:59.835  [2024-12-10 00:19:15.488941] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:59.836  [2024-12-10 00:19:15.488981] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:59.836  [2024-12-10 00:19:15.488988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:59.836  [2024-12-10 00:19:15.488995] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:59.836  [2024-12-10 00:19:15.489000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:59.836  [2024-12-10 00:19:15.490666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:59.836  [2024-12-10 00:19:15.490703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:37:59.836  [2024-12-10 00:19:15.490810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:59.836  [2024-12-10 00:19:15.490810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0
00:37:59.836   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.836  INFO: Log level set to 20
00:37:59.836  INFO: Requests:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "method": "nvmf_set_config",
00:37:59.836    "id": 1,
00:37:59.836    "params": {
00:37:59.836      "admin_cmd_passthru": {
00:37:59.836        "identify_ctrlr": true
00:37:59.836      }
00:37:59.836    }
00:37:59.836  }
00:37:59.836  
00:37:59.836  INFO: response:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "id": 1,
00:37:59.836    "result": true
00:37:59.836  }
00:37:59.836  
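The `rpc_cmd nvmf_set_config --passthru-identify-ctrlr` call above is just the JSON-RPC 2.0 request shown in the `INFO: Requests` block, delivered over the UNIX socket `/var/tmp/spdk.sock`. A sketch that composes and validates the same payload locally, without contacting a live target (the local delivery step is omitted; with a running target it would go through SPDK's `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
set -eu

# The payload rpc_cmd sends for nvmf_set_config, as logged above.
req='{"jsonrpc": "2.0", "method": "nvmf_set_config", "id": 1,
      "params": {"admin_cmd_passthru": {"identify_ctrlr": true}}}'

# Validate that the request is well-formed JSON (python3 assumed available).
printf '%s' "$req" | python3 -m json.tool
```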
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:59.836   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.836  INFO: Setting log level to 20
00:37:59.836  INFO: Setting log level to 20
00:37:59.836  INFO: Log level set to 20
00:37:59.836  INFO: Log level set to 20
00:37:59.836  INFO: Requests:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "method": "framework_start_init",
00:37:59.836    "id": 1
00:37:59.836  }
00:37:59.836  
00:37:59.836  INFO: Requests:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "method": "framework_start_init",
00:37:59.836    "id": 1
00:37:59.836  }
00:37:59.836  
00:37:59.836  [2024-12-10 00:19:15.607288] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:37:59.836  INFO: response:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "id": 1,
00:37:59.836    "result": true
00:37:59.836  }
00:37:59.836  
00:37:59.836  INFO: response:
00:37:59.836  {
00:37:59.836    "jsonrpc": "2.0",
00:37:59.836    "id": 1,
00:37:59.836    "result": true
00:37:59.836  }
00:37:59.836  
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:59.836   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.836  INFO: Setting log level to 40
00:37:59.836  INFO: Setting log level to 40
00:37:59.836  INFO: Setting log level to 40
00:37:59.836  [2024-12-10 00:19:15.620588] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:59.836   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:37:59.836   00:19:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:59.836   00:19:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.129  Nvme0n1
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.129   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.129   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.129   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.129  [2024-12-10 00:19:18.530438] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.129   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.129  [
00:38:03.129    {
00:38:03.129      "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:38:03.129      "subtype": "Discovery",
00:38:03.129      "listen_addresses": [],
00:38:03.129      "allow_any_host": true,
00:38:03.129      "hosts": []
00:38:03.129    },
00:38:03.129    {
00:38:03.129      "nqn": "nqn.2016-06.io.spdk:cnode1",
00:38:03.129      "subtype": "NVMe",
00:38:03.129      "listen_addresses": [
00:38:03.129        {
00:38:03.129          "trtype": "TCP",
00:38:03.129          "adrfam": "IPv4",
00:38:03.129          "traddr": "10.0.0.2",
00:38:03.129          "trsvcid": "4420"
00:38:03.129        }
00:38:03.129      ],
00:38:03.129      "allow_any_host": true,
00:38:03.129      "hosts": [],
00:38:03.129      "serial_number": "SPDK00000000000001",
00:38:03.129      "model_number": "SPDK bdev Controller",
00:38:03.129      "max_namespaces": 1,
00:38:03.129      "min_cntlid": 1,
00:38:03.129      "max_cntlid": 65519,
00:38:03.129      "namespaces": [
00:38:03.129        {
00:38:03.129          "nsid": 1,
00:38:03.129          "bdev_name": "Nvme0n1",
00:38:03.129          "name": "Nvme0n1",
00:38:03.129          "nguid": "99884A00256E4327A20D2CED4521F499",
00:38:03.129          "uuid": "99884a00-256e-4327-a20d-2ced4521f499"
00:38:03.129        }
00:38:03.129      ]
00:38:03.129    }
00:38:03.129  ]
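Elsewhere in this log (e.g. `gen_nvme.sh | jq -r '.config[].params.traddr'`), fields are pulled out of JSON replies with a one-line filter. The same can be done with the `nvmf_get_subsystems` reply above; sketched here against a trimmed copy of that reply rather than a live RPC call, using python3 in place of jq:

```shell
#!/usr/bin/env bash
set -eu

# Trimmed copy of the nvmf_get_subsystems reply above.
subsystems='[{"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
 {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
  "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                        "traddr": "10.0.0.2", "trsvcid": "4420"}]}]'

# Print nqn/address/port for every subsystem that has a listener;
# the discovery subsystem has none here and is skipped.
printf '%s' "$subsystems" | python3 -c '
import json, sys
for sub in json.load(sys.stdin):
    for addr in sub.get("listen_addresses", []):
        print(sub["nqn"], addr["traddr"], addr["trsvcid"])
'
```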
00:38:03.129   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.129    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:38:03.129    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:38:03.129    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:38:03.129   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN
00:38:03.130    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r '        trtype:tcp         adrfam:IPv4         traddr:10.0.0.2         trsvcid:4420         subnqn:nqn.2016-06.io.spdk:cnode1'
00:38:03.130    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:38:03.130    00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']'
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:03.130   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:03.130   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:03.130   00:19:18 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:38:03.130   00:19:18 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20}
00:38:03.130   00:19:18 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:38:03.130  rmmod nvme_tcp
00:38:03.387  rmmod nvme_fabrics
00:38:03.387  rmmod nvme_keyring
00:38:03.387   00:19:19 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:38:03.387   00:19:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e
00:38:03.387   00:19:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0
00:38:03.387   00:19:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 3338742 ']'
00:38:03.387   00:19:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3338742
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3338742 ']'
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3338742
00:38:03.387    00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:03.387    00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3338742
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3338742'
00:38:03.387  killing process with pid 3338742
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3338742
00:38:03.387   00:19:19 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3338742
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore
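The `iptr` cleanup above works because every rule the test added earlier was tagged with an `SPDK_NVMF:` comment (see the `iptables ... -m comment` line near the top of this section), so teardown reduces to `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering half of that pattern, demonstrated on a canned ruleset instead of the live firewall:

```shell
#!/usr/bin/env bash
set -eu

# Canned iptables-save excerpt: one pre-existing rule, one SPDK-tagged
# rule (as added earlier in this log), one trailing rule.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 ..."
-A INPUT -j DROP'

# Drop only the SPDK-tagged rule; everything else survives untouched.
# On a live system this output would be piped into iptables-restore.
printf '%s\n' "$rules" | grep -v SPDK_NVMF
```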
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns
00:38:04.761   00:19:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:04.761   00:19:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:38:04.761    00:19:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:07.311   00:19:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:38:07.311  
00:38:07.311  real	0m21.854s
00:38:07.311  user	0m26.922s
00:38:07.311  sys	0m6.264s
00:38:07.311   00:19:22 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:07.311   00:19:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:38:07.311  ************************************
00:38:07.311  END TEST nvmf_identify_passthru
00:38:07.311  ************************************
00:38:07.311   00:19:22  -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:38:07.311   00:19:22  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:07.311   00:19:22  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:07.311   00:19:22  -- common/autotest_common.sh@10 -- # set +x
00:38:07.311  ************************************
00:38:07.311  START TEST nvmf_dif
00:38:07.311  ************************************
00:38:07.311   00:19:22 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:38:07.311  * Looking for test storage...
00:38:07.311  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:38:07.311    00:19:22 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:38:07.311     00:19:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version
00:38:07.311     00:19:22 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:38:07.311    00:19:22 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-:
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-:
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<'
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@345 -- # : 1
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@365 -- # decimal 1
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@353 -- # local d=1
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@355 -- # echo 1
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@366 -- # decimal 2
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@353 -- # local d=2
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:07.311     00:19:22 nvmf_dif -- scripts/common.sh@355 -- # echo 2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:07.311    00:19:22 nvmf_dif -- scripts/common.sh@368 -- # return 0
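The `lt 1.15 2` trace above shows how `cmp_versions` in scripts/common.sh decides version ordering: split both versions on `.`, `-` and `:` and compare component-wise. A condensed sketch of that logic (the function name `version_lt` is mine; the real helper also handles `>`, `>=`, `<=`):

```shell
#!/usr/bin/env bash
set -eu

# Return 0 (true) iff $1 < $2 under component-wise numeric comparison,
# splitting on the same IFS=.-: used by cmp_versions above.
# Missing components are treated as 0, so 1.15 vs 2 compares 1<2.
version_lt() {
  local IFS=.-: v=0 ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  while (( v < ${#ver1[@]} || v < ${#ver2[@]} )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( v++ )) || true
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```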
00:38:07.311    00:19:22 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:07.311    00:19:22 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:38:07.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:07.311  		--rc genhtml_branch_coverage=1
00:38:07.311  		--rc genhtml_function_coverage=1
00:38:07.311  		--rc genhtml_legend=1
00:38:07.311  		--rc geninfo_all_blocks=1
00:38:07.311  		--rc geninfo_unexecuted_blocks=1
00:38:07.311  		
00:38:07.311  		'
00:38:07.311    00:19:22 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:38:07.311  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:07.311  		--rc genhtml_branch_coverage=1
00:38:07.311  		--rc genhtml_function_coverage=1
00:38:07.311  		--rc genhtml_legend=1
00:38:07.312  		--rc geninfo_all_blocks=1
00:38:07.312  		--rc geninfo_unexecuted_blocks=1
00:38:07.312  		
00:38:07.312  		'
00:38:07.312    00:19:22 nvmf_dif -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:38:07.312  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:07.312  		--rc genhtml_branch_coverage=1
00:38:07.312  		--rc genhtml_function_coverage=1
00:38:07.312  		--rc genhtml_legend=1
00:38:07.312  		--rc geninfo_all_blocks=1
00:38:07.312  		--rc geninfo_unexecuted_blocks=1
00:38:07.312  		
00:38:07.312  		'
00:38:07.312    00:19:22 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:38:07.312  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:07.312  		--rc genhtml_branch_coverage=1
00:38:07.312  		--rc genhtml_function_coverage=1
00:38:07.312  		--rc genhtml_legend=1
00:38:07.312  		--rc geninfo_all_blocks=1
00:38:07.312  		--rc geninfo_unexecuted_blocks=1
00:38:07.312  		
00:38:07.312  		'
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:38:07.312     00:19:22 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:38:07.312     00:19:22 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:38:07.312     00:19:22 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob
00:38:07.312     00:19:22 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:38:07.312     00:19:22 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:38:07.312     00:19:22 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:38:07.312      00:19:22 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:07.312      00:19:22 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:07.312      00:19:22 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:07.312      00:19:22 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:38:07.312      00:19:22 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@51 -- # : 0
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:38:07.312  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:38:07.312    00:19:22 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:38:07.312   00:19:22 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:38:07.312   00:19:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:38:07.312    00:19:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:38:07.312   00:19:22 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable
00:38:07.312   00:19:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@320 -- # e810=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@321 -- # x722=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@322 -- # mlx=()
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:38:13.879  Found 0000:af:00.0 (0x8086 - 0x159b)
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:38:13.879  Found 0000:af:00.1 (0x8086 - 0x159b)
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:38:13.879  Found net devices under 0000:af:00.0: cvl_0_0
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:38:13.879  Found net devices under 0000:af:00.1: cvl_0_1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:38:13.879  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:38:13.879  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms
00:38:13.879  
00:38:13.879  --- 10.0.0.2 ping statistics ---
00:38:13.879  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:13.879  rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:38:13.879  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:38:13.879  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms
00:38:13.879  
00:38:13.879  --- 10.0.0.1 ping statistics ---
00:38:13.879  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:38:13.879  rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@450 -- # return 0
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:38:13.879   00:19:28 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:38:15.812  0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:38:15.812  0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:38:15.812  0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:15.812   00:19:31 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:38:15.812   00:19:31 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3344307
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3344307
00:38:15.812   00:19:31 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3344307 ']'
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:15.812  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:15.812   00:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:16.071  [2024-12-10 00:19:31.703028] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:38:16.071  [2024-12-10 00:19:31.703070] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:16.071  [2024-12-10 00:19:31.781759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:16.071  [2024-12-10 00:19:31.821468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:16.071  [2024-12-10 00:19:31.821504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:16.071  [2024-12-10 00:19:31.821511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:16.071  [2024-12-10 00:19:31.821516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:16.071  [2024-12-10 00:19:31.821521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:16.071  [2024-12-10 00:19:31.822002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:16.071   00:19:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:16.071   00:19:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0
00:38:16.071   00:19:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:16.071   00:19:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:16.071   00:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:16.331   00:19:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:16.331   00:19:31 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:38:16.331   00:19:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:16.331  [2024-12-10 00:19:31.957487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.331   00:19:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:16.331   00:19:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:16.331  ************************************
00:38:16.331  START TEST fio_dif_1_default
00:38:16.331  ************************************
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.331   00:19:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:16.331  bdev_null0
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:16.331  [2024-12-10 00:19:32.029797] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=()
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:16.331  {
00:38:16.331    "params": {
00:38:16.331      "name": "Nvme$subsystem",
00:38:16.331      "trtype": "$TEST_TRANSPORT",
00:38:16.331      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:16.331      "adrfam": "ipv4",
00:38:16.331      "trsvcid": "$NVMF_PORT",
00:38:16.331      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:16.331      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:16.331      "hdgst": ${hdgst:-false},
00:38:16.331      "ddgst": ${ddgst:-false}
00:38:16.331    },
00:38:16.331    "method": "bdev_nvme_attach_controller"
00:38:16.331  }
00:38:16.331  EOF
00:38:16.331  )")
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:16.331     00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq .
00:38:16.331     00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=,
00:38:16.331     00:19:32 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:16.331    "params": {
00:38:16.331      "name": "Nvme0",
00:38:16.331      "trtype": "tcp",
00:38:16.331      "traddr": "10.0.0.2",
00:38:16.331      "adrfam": "ipv4",
00:38:16.331      "trsvcid": "4420",
00:38:16.331      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:16.331      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:16.331      "hdgst": false,
00:38:16.331      "ddgst": false
00:38:16.331    },
00:38:16.331    "method": "bdev_nvme_attach_controller"
00:38:16.331  }'
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:16.331    00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:16.331   00:19:32 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:16.589  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:16.589  fio-3.35
00:38:16.589  Starting 1 thread
00:38:28.806  
00:38:28.806  filename0: (groupid=0, jobs=1): err= 0: pid=3344621: Tue Dec 10 00:19:42 2024
00:38:28.806    read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec)
00:38:28.806      slat (nsec): min=5829, max=25590, avg=6184.78, stdev=1007.47
00:38:28.806      clat (usec): min=40763, max=43342, avg=41000.91, stdev=179.81
00:38:28.806       lat (usec): min=40769, max=43367, avg=41007.09, stdev=180.24
00:38:28.806      clat percentiles (usec):
00:38:28.806       |  1.00th=[40633],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:38:28.806       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:38:28.806       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:38:28.806       | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254],
00:38:28.806       | 99.99th=[43254]
00:38:28.806     bw (  KiB/s): min=  384, max=  416, per=99.47%, avg=388.80, stdev=11.72, samples=20
00:38:28.806     iops        : min=   96, max=  104, avg=97.20, stdev= 2.93, samples=20
00:38:28.806    lat (msec)   : 50=100.00%
00:38:28.806    cpu          : usr=92.11%, sys=7.65%, ctx=12, majf=0, minf=0
00:38:28.806    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:28.806       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:28.806       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:28.806       issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:28.806       latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:28.806  
00:38:28.806  Run status group 0 (all jobs):
00:38:28.806     READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10009-10009msec
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:28.806   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807  
00:38:28.807  real	0m11.099s
00:38:28.807  user	0m16.181s
00:38:28.807  sys	0m1.077s
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:38:28.807  ************************************
00:38:28.807  END TEST fio_dif_1_default
00:38:28.807  ************************************
00:38:28.807   00:19:43 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:38:28.807   00:19:43 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:28.807   00:19:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:28.807  ************************************
00:38:28.807  START TEST fio_dif_1_multi_subsystems
00:38:28.807  ************************************
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807  bdev_null0
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807  [2024-12-10 00:19:43.202778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807  bdev_null1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=()
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:28.807  {
00:38:28.807    "params": {
00:38:28.807      "name": "Nvme$subsystem",
00:38:28.807      "trtype": "$TEST_TRANSPORT",
00:38:28.807      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:28.807      "adrfam": "ipv4",
00:38:28.807      "trsvcid": "$NVMF_PORT",
00:38:28.807      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:28.807      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:28.807      "hdgst": ${hdgst:-false},
00:38:28.807      "ddgst": ${ddgst:-false}
00:38:28.807    },
00:38:28.807    "method": "bdev_nvme_attach_controller"
00:38:28.807  }
00:38:28.807  EOF
00:38:28.807  )")
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:38:28.807     00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:28.807   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 ))
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:28.807    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:28.807  {
00:38:28.807    "params": {
00:38:28.807      "name": "Nvme$subsystem",
00:38:28.807      "trtype": "$TEST_TRANSPORT",
00:38:28.807      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:28.807      "adrfam": "ipv4",
00:38:28.807      "trsvcid": "$NVMF_PORT",
00:38:28.808      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:28.808      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:28.808      "hdgst": ${hdgst:-false},
00:38:28.808      "ddgst": ${ddgst:-false}
00:38:28.808    },
00:38:28.808    "method": "bdev_nvme_attach_controller"
00:38:28.808  }
00:38:28.808  EOF
00:38:28.808  )")
00:38:28.808     00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ ))
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files ))
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq .
00:38:28.808     00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=,
00:38:28.808     00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:28.808    "params": {
00:38:28.808      "name": "Nvme0",
00:38:28.808      "trtype": "tcp",
00:38:28.808      "traddr": "10.0.0.2",
00:38:28.808      "adrfam": "ipv4",
00:38:28.808      "trsvcid": "4420",
00:38:28.808      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:28.808      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:28.808      "hdgst": false,
00:38:28.808      "ddgst": false
00:38:28.808    },
00:38:28.808    "method": "bdev_nvme_attach_controller"
00:38:28.808  },{
00:38:28.808    "params": {
00:38:28.808      "name": "Nvme1",
00:38:28.808      "trtype": "tcp",
00:38:28.808      "traddr": "10.0.0.2",
00:38:28.808      "adrfam": "ipv4",
00:38:28.808      "trsvcid": "4420",
00:38:28.808      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:28.808      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:28.808      "hdgst": false,
00:38:28.808      "ddgst": false
00:38:28.808    },
00:38:28.808    "method": "bdev_nvme_attach_controller"
00:38:28.808  }'
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:28.808    00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:28.808   00:19:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:28.808  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:28.808  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:38:28.808  fio-3.35
00:38:28.808  Starting 2 threads
00:38:38.791  
00:38:38.791  filename0: (groupid=0, jobs=1): err= 0: pid=3346587: Tue Dec 10 00:19:54 2024
00:38:38.791    read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10010msec)
00:38:38.791      slat (nsec): min=5921, max=29016, avg=7659.91, stdev=2681.58
00:38:38.791      clat (usec): min=369, max=42046, avg=40833.80, stdev=2596.30
00:38:38.791       lat (usec): min=375, max=42058, avg=40841.46, stdev=2596.33
00:38:38.791      clat percentiles (usec):
00:38:38.791       |  1.00th=[40633],  5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:38:38.791       | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:38:38.791       | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:38:38.791       | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:38:38.791       | 99.99th=[42206]
00:38:38.791     bw (  KiB/s): min=  384, max=  416, per=32.70%, avg=390.40, stdev=13.13, samples=20
00:38:38.791     iops        : min=   96, max=  104, avg=97.60, stdev= 3.28, samples=20
00:38:38.791    lat (usec)   : 500=0.41%
00:38:38.791    lat (msec)   : 50=99.59%
00:38:38.791    cpu          : usr=96.72%, sys=3.03%, ctx=14, majf=0, minf=107
00:38:38.791    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.791       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.791       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.791       issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.791       latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:38.791  filename1: (groupid=0, jobs=1): err= 0: pid=3346588: Tue Dec 10 00:19:54 2024
00:38:38.791    read: IOPS=200, BW=802KiB/s (821kB/s)(8048KiB/10035msec)
00:38:38.791      slat (nsec): min=5917, max=28624, avg=6975.64, stdev=1959.41
00:38:38.791      clat (usec): min=395, max=42669, avg=19930.10, stdev=20509.89
00:38:38.791       lat (usec): min=401, max=42676, avg=19937.08, stdev=20509.38
00:38:38.791      clat percentiles (usec):
00:38:38.791       |  1.00th=[  412],  5.00th=[  449], 10.00th=[  461], 20.00th=[  482],
00:38:38.791       | 30.00th=[  490], 40.00th=[  502], 50.00th=[  619], 60.00th=[41157],
00:38:38.791       | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206],
00:38:38.791       | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730],
00:38:38.791       | 99.99th=[42730]
00:38:38.791     bw (  KiB/s): min=  704, max=  960, per=67.33%, avg=803.20, stdev=73.34, samples=20
00:38:38.792     iops        : min=  176, max=  240, avg=200.80, stdev=18.33, samples=20
00:38:38.792    lat (usec)   : 500=38.52%, 750=14.17%
00:38:38.792    lat (msec)   : 50=47.32%
00:38:38.792    cpu          : usr=96.65%, sys=3.10%, ctx=14, majf=0, minf=65
00:38:38.792    IO depths    : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:38.792       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.792       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:38.792       issued rwts: total=2012,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:38.792       latency   : target=0, window=0, percentile=100.00%, depth=4
00:38:38.792  
00:38:38.792  Run status group 0 (all jobs):
00:38:38.792     READ: bw=1193KiB/s (1221kB/s), 392KiB/s-802KiB/s (401kB/s-821kB/s), io=11.7MiB (12.3MB), run=10010-10035msec
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@"
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:38.792  
00:38:38.792  real	0m11.432s
00:38:38.792  user	0m26.719s
00:38:38.792  sys	0m0.905s
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:38:38.792  ************************************
00:38:38.792  END TEST fio_dif_1_multi_subsystems
00:38:38.792  ************************************
00:38:38.792   00:19:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:38:38.792   00:19:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:38:38.792   00:19:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:38.792   00:19:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:38:39.051  ************************************
00:38:39.051  START TEST fio_dif_rand_params
00:38:39.051  ************************************
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:39.051   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:39.052  bdev_null0
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:39.052  [2024-12-10 00:19:54.713086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:39.052  {
00:38:39.052    "params": {
00:38:39.052      "name": "Nvme$subsystem",
00:38:39.052      "trtype": "$TEST_TRANSPORT",
00:38:39.052      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:39.052      "adrfam": "ipv4",
00:38:39.052      "trsvcid": "$NVMF_PORT",
00:38:39.052      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:39.052      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:39.052      "hdgst": ${hdgst:-false},
00:38:39.052      "ddgst": ${ddgst:-false}
00:38:39.052    },
00:38:39.052    "method": "bdev_nvme_attach_controller"
00:38:39.052  }
00:38:39.052  EOF
00:38:39.052  )")
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:39.052     00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:39.052     00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:39.052     00:19:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:39.052    "params": {
00:38:39.052      "name": "Nvme0",
00:38:39.052      "trtype": "tcp",
00:38:39.052      "traddr": "10.0.0.2",
00:38:39.052      "adrfam": "ipv4",
00:38:39.052      "trsvcid": "4420",
00:38:39.052      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:39.052      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:39.052      "hdgst": false,
00:38:39.052      "ddgst": false
00:38:39.052    },
00:38:39.052    "method": "bdev_nvme_attach_controller"
00:38:39.052  }'
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:39.052    00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:39.052   00:19:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:39.311  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:38:39.311  ...
00:38:39.311  fio-3.35
00:38:39.311  Starting 3 threads
00:38:45.871  
00:38:45.871  filename0: (groupid=0, jobs=1): err= 0: pid=3348470: Tue Dec 10 00:20:00 2024
00:38:45.871    read: IOPS=305, BW=38.2MiB/s (40.1MB/s)(193MiB/5047msec)
00:38:45.871      slat (nsec): min=6141, max=26125, avg=10639.71, stdev=1975.28
00:38:45.871      clat (usec): min=3394, max=51528, avg=9772.12, stdev=6502.09
00:38:45.871       lat (usec): min=3400, max=51540, avg=9782.76, stdev=6502.05
00:38:45.871      clat percentiles (usec):
00:38:45.871       |  1.00th=[ 3687],  5.00th=[ 6194], 10.00th=[ 6718], 20.00th=[ 7898],
00:38:45.871       | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241],
00:38:45.871       | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11076],
00:38:45.871       | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643],
00:38:45.871       | 99.99th=[51643]
00:38:45.871     bw (  KiB/s): min=28928, max=45312, per=34.00%, avg=39424.00, stdev=6086.85, samples=10
00:38:45.871     iops        : min=  226, max=  354, avg=308.00, stdev=47.55, samples=10
00:38:45.871    lat (msec)   : 4=1.43%, 10=80.36%, 20=15.55%, 50=2.33%, 100=0.32%
00:38:45.871    cpu          : usr=94.37%, sys=5.35%, ctx=13, majf=0, minf=11
00:38:45.871    IO depths    : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.871       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       issued rwts: total=1543,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.871       latency   : target=0, window=0, percentile=100.00%, depth=3
00:38:45.871  filename0: (groupid=0, jobs=1): err= 0: pid=3348471: Tue Dec 10 00:20:00 2024
00:38:45.871    read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(191MiB/5045msec)
00:38:45.871      slat (nsec): min=6164, max=72490, avg=10619.03, stdev=2553.58
00:38:45.871      clat (usec): min=3355, max=51535, avg=9888.97, stdev=5606.11
00:38:45.871       lat (usec): min=3366, max=51547, avg=9899.59, stdev=5606.13
00:38:45.871      clat percentiles (usec):
00:38:45.871       |  1.00th=[ 3687],  5.00th=[ 6063], 10.00th=[ 6521], 20.00th=[ 7963],
00:38:45.871       | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9765],
00:38:45.871       | 70.00th=[10159], 80.00th=[10683], 90.00th=[11469], 95.00th=[11994],
00:38:45.871       | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643],
00:38:45.871       | 99.99th=[51643]
00:38:45.871     bw (  KiB/s): min=26112, max=45824, per=33.60%, avg=38963.20, stdev=5258.64, samples=10
00:38:45.871     iops        : min=  204, max=  358, avg=304.40, stdev=41.08, samples=10
00:38:45.871    lat (msec)   : 4=2.23%, 10=64.63%, 20=31.23%, 50=1.38%, 100=0.52%
00:38:45.871    cpu          : usr=94.75%, sys=4.96%, ctx=7, majf=0, minf=9
00:38:45.871    IO depths    : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.871       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       issued rwts: total=1524,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.871       latency   : target=0, window=0, percentile=100.00%, depth=3
00:38:45.871  filename0: (groupid=0, jobs=1): err= 0: pid=3348472: Tue Dec 10 00:20:00 2024
00:38:45.871    read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5044msec)
00:38:45.871      slat (nsec): min=6125, max=25921, avg=10816.98, stdev=2061.38
00:38:45.871      clat (usec): min=2869, max=53131, avg=10040.93, stdev=6218.36
00:38:45.871       lat (usec): min=2875, max=53143, avg=10051.75, stdev=6218.44
00:38:45.871      clat percentiles (usec):
00:38:45.871       |  1.00th=[ 3654],  5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 8094],
00:38:45.871       | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765],
00:38:45.871       | 70.00th=[10159], 80.00th=[10552], 90.00th=[11207], 95.00th=[11731],
00:38:45.871       | 99.00th=[49021], 99.50th=[50070], 99.90th=[51643], 99.95th=[53216],
00:38:45.871       | 99.99th=[53216]
00:38:45.871     bw (  KiB/s): min=32256, max=43776, per=33.16%, avg=38451.20, stdev=3721.16, samples=10
00:38:45.871     iops        : min=  252, max=  342, avg=300.40, stdev=29.07, samples=10
00:38:45.871    lat (msec)   : 4=2.52%, 10=64.98%, 20=30.10%, 50=1.86%, 100=0.53%
00:38:45.871    cpu          : usr=94.73%, sys=4.98%, ctx=10, majf=0, minf=9
00:38:45.871    IO depths    : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:38:45.871       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:45.871       issued rwts: total=1505,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:45.871       latency   : target=0, window=0, percentile=100.00%, depth=3
00:38:45.871  
00:38:45.871  Run status group 0 (all jobs):
00:38:45.871     READ: bw=113MiB/s (119MB/s), 37.3MiB/s-38.2MiB/s (39.1MB/s-40.1MB/s), io=572MiB (599MB), run=5044-5047msec
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871  bdev_null0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871  [2024-12-10 00:20:00.927185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871  bdev_null1
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871  bdev_null2
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.871   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:45.872   00:20:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:45.872  {
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme$subsystem",
00:38:45.872      "trtype": "$TEST_TRANSPORT",
00:38:45.872      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "$NVMF_PORT",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:45.872      "hdgst": ${hdgst:-false},
00:38:45.872      "ddgst": ${ddgst:-false}
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  }
00:38:45.872  EOF
00:38:45.872  )")
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:45.872     00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:45.872  {
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme$subsystem",
00:38:45.872      "trtype": "$TEST_TRANSPORT",
00:38:45.872      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "$NVMF_PORT",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:45.872      "hdgst": ${hdgst:-false},
00:38:45.872      "ddgst": ${ddgst:-false}
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  }
00:38:45.872  EOF
00:38:45.872  )")
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:38:45.872     00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:45.872  {
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme$subsystem",
00:38:45.872      "trtype": "$TEST_TRANSPORT",
00:38:45.872      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "$NVMF_PORT",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:45.872      "hdgst": ${hdgst:-false},
00:38:45.872      "ddgst": ${ddgst:-false}
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  }
00:38:45.872  EOF
00:38:45.872  )")
00:38:45.872     00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:45.872     00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:45.872     00:20:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme0",
00:38:45.872      "trtype": "tcp",
00:38:45.872      "traddr": "10.0.0.2",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "4420",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:45.872      "hdgst": false,
00:38:45.872      "ddgst": false
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  },{
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme1",
00:38:45.872      "trtype": "tcp",
00:38:45.872      "traddr": "10.0.0.2",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "4420",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:45.872      "hdgst": false,
00:38:45.872      "ddgst": false
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  },{
00:38:45.872    "params": {
00:38:45.872      "name": "Nvme2",
00:38:45.872      "trtype": "tcp",
00:38:45.872      "traddr": "10.0.0.2",
00:38:45.872      "adrfam": "ipv4",
00:38:45.872      "trsvcid": "4420",
00:38:45.872      "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:38:45.872      "hostnqn": "nqn.2016-06.io.spdk:host2",
00:38:45.872      "hdgst": false,
00:38:45.872      "ddgst": false
00:38:45.872    },
00:38:45.872    "method": "bdev_nvme_attach_controller"
00:38:45.872  }'
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:45.872    00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:45.872   00:20:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:45.872  filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:45.872  ...
00:38:45.872  filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:45.872  ...
00:38:45.872  filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:38:45.872  ...
00:38:45.872  fio-3.35
00:38:45.873  Starting 24 threads
00:38:58.104  
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349532: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10017msec)
00:38:58.105      slat (usec): min=5, max=124, avg=32.09, stdev=19.63
00:38:58.105      clat (usec): min=10574, max=35004, avg=26724.69, stdev=1981.98
00:38:58.105       lat (usec): min=10625, max=35020, avg=26756.78, stdev=1977.88
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[23725],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540],
00:38:58.105       | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589],
00:38:58.105       | 99.99th=[34866]
00:38:58.105     bw (  KiB/s): min= 2171, max= 2560, per=4.16%, avg=2370.26, stdev=130.83, samples=19
00:38:58.105     iops        : min=  542, max=  640, avg=592.42, stdev=32.78, samples=19
00:38:58.105    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.105    cpu          : usr=98.49%, sys=0.99%, ctx=70, majf=0, minf=113
00:38:58.105    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.105       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.105       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349533: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.105      slat (nsec): min=5763, max=82495, avg=37895.90, stdev=16225.75
00:38:58.105      clat (usec): min=11742, max=50980, avg=26704.39, stdev=2138.06
00:38:58.105       lat (usec): min=11771, max=50996, avg=26742.28, stdev=2138.59
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[24249],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.105       | 99.00th=[30802], 99.50th=[31065], 99.90th=[41157], 99.95th=[41157],
00:38:58.105       | 99.99th=[51119]
00:38:58.105     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=130.41, samples=19
00:38:58.105     iops        : min=  544, max=  640, avg=590.89, stdev=32.62, samples=19
00:38:58.105    lat (msec)   : 20=0.30%, 50=99.66%, 100=0.03%
00:38:58.105    cpu          : usr=98.54%, sys=1.06%, ctx=31, majf=0, minf=76
00:38:58.105    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.105       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.105       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349534: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec)
00:38:58.105      slat (usec): min=5, max=123, avg=36.72, stdev=16.17
00:38:58.105      clat (usec): min=11694, max=42063, avg=26701.10, stdev=2135.65
00:38:58.105       lat (usec): min=11709, max=42079, avg=26737.82, stdev=2134.73
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[24249],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.105       | 99.00th=[30802], 99.50th=[31327], 99.90th=[42206], 99.95th=[42206],
00:38:58.105       | 99.99th=[42206]
00:38:58.105     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2363.84, stdev=130.72, samples=19
00:38:58.105     iops        : min=  544, max=  640, avg=590.84, stdev=32.70, samples=19
00:38:58.105    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.105    cpu          : usr=98.52%, sys=1.02%, ctx=55, majf=0, minf=66
00:38:58.105    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.105       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.105       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349535: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10017msec)
00:38:58.105      slat (usec): min=8, max=102, avg=38.68, stdev=16.64
00:38:58.105      clat (usec): min=16278, max=32738, avg=26681.48, stdev=1875.20
00:38:58.105       lat (usec): min=16289, max=32766, avg=26720.16, stdev=1877.65
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[23462],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.105       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:38:58.105       | 99.99th=[32637]
00:38:58.105     bw (  KiB/s): min= 2176, max= 2560, per=4.16%, avg=2370.05, stdev=123.92, samples=19
00:38:58.105     iops        : min=  544, max=  640, avg=592.32, stdev=31.06, samples=19
00:38:58.105    lat (msec)   : 20=0.35%, 50=99.65%
00:38:58.105    cpu          : usr=98.56%, sys=1.00%, ctx=68, majf=0, minf=75
00:38:58.105    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.105       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.105       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349536: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.105      slat (nsec): min=6929, max=89880, avg=37247.50, stdev=18596.71
00:38:58.105      clat (usec): min=23045, max=31442, avg=26696.87, stdev=1823.16
00:38:58.105       lat (usec): min=23060, max=31490, avg=26734.11, stdev=1824.58
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[24249],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.105       | 99.00th=[30802], 99.50th=[30802], 99.90th=[31327], 99.95th=[31327],
00:38:58.105       | 99.99th=[31327]
00:38:58.105     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2363.32, stdev=130.71, samples=19
00:38:58.105     iops        : min=  544, max=  640, avg=590.63, stdev=32.70, samples=19
00:38:58.105    lat (msec)   : 50=100.00%
00:38:58.105    cpu          : usr=98.80%, sys=0.78%, ctx=27, majf=0, minf=75
00:38:58.105    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.105       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.105       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.105       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.105  filename0: (groupid=0, jobs=1): err= 0: pid=3349537: Tue Dec 10 00:20:12 2024
00:38:58.105    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.105      slat (nsec): min=7625, max=81798, avg=35571.91, stdev=17080.53
00:38:58.105      clat (usec): min=14548, max=32828, avg=26739.65, stdev=1825.86
00:38:58.105       lat (usec): min=14559, max=32841, avg=26775.22, stdev=1826.95
00:38:58.105      clat percentiles (usec):
00:38:58.105       |  1.00th=[24249],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.105       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.105       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29754], 95.00th=[30278],
00:38:58.105       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:38:58.106       | 99.99th=[32900]
00:38:58.106     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2363.32, stdev=130.71, samples=19
00:38:58.106     iops        : min=  544, max=  640, avg=590.63, stdev=32.70, samples=19
00:38:58.106    lat (msec)   : 20=0.03%, 50=99.97%
00:38:58.106    cpu          : usr=98.11%, sys=1.11%, ctx=296, majf=0, minf=69
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename0: (groupid=0, jobs=1): err= 0: pid=3349538: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10017msec)
00:38:58.106      slat (nsec): min=6427, max=99275, avg=40453.72, stdev=17882.21
00:38:58.106      clat (usec): min=18775, max=31362, avg=26640.00, stdev=1871.33
00:38:58.106       lat (usec): min=18790, max=31425, avg=26680.45, stdev=1875.16
00:38:58.106      clat percentiles (usec):
00:38:58.106       |  1.00th=[23462],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.106       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.106       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30016],
00:38:58.106       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:38:58.106       | 99.99th=[31327]
00:38:58.106     bw (  KiB/s): min= 2176, max= 2560, per=4.16%, avg=2370.05, stdev=123.92, samples=19
00:38:58.106     iops        : min=  544, max=  640, avg=592.32, stdev=31.06, samples=19
00:38:58.106    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.106    cpu          : usr=98.73%, sys=0.85%, ctx=52, majf=0, minf=69
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename0: (groupid=0, jobs=1): err= 0: pid=3349539: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10016msec)
00:38:58.106      slat (usec): min=7, max=100, avg=29.46, stdev=19.08
00:38:58.106      clat (usec): min=9482, max=31572, avg=26693.15, stdev=2144.52
00:38:58.106       lat (usec): min=9499, max=31591, avg=26722.61, stdev=2145.62
00:38:58.106      clat percentiles (usec):
00:38:58.106       |  1.00th=[22152],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.106       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.106       | 70.00th=[27395], 80.00th=[28705], 90.00th=[29754], 95.00th=[30540],
00:38:58.106       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31589],
00:38:58.106       | 99.99th=[31589]
00:38:58.106     bw (  KiB/s): min= 2171, max= 2560, per=4.17%, avg=2377.32, stdev=150.39, samples=19
00:38:58.106     iops        : min=  542, max=  640, avg=594.21, stdev=37.70, samples=19
00:38:58.106    lat (msec)   : 10=0.27%, 20=0.54%, 50=99.19%
00:38:58.106    cpu          : usr=98.28%, sys=1.08%, ctx=175, majf=0, minf=77
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename1: (groupid=0, jobs=1): err= 0: pid=3349540: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=594, BW=2378KiB/s (2435kB/s)(23.2MiB/10011msec)
00:38:58.106      slat (usec): min=6, max=103, avg=35.09, stdev=18.26
00:38:58.106      clat (usec): min=9570, max=31375, avg=26623.18, stdev=2170.48
00:38:58.106       lat (usec): min=9581, max=31431, avg=26658.27, stdev=2173.55
00:38:58.106      clat percentiles (usec):
00:38:58.106       |  1.00th=[19792],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.106       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.106       | 70.00th=[27395], 80.00th=[28705], 90.00th=[29492], 95.00th=[30278],
00:38:58.106       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31327],
00:38:58.106       | 99.99th=[31327]
00:38:58.106     bw (  KiB/s): min= 2176, max= 2560, per=4.17%, avg=2377.21, stdev=142.84, samples=19
00:38:58.106     iops        : min=  544, max=  640, avg=594.21, stdev=35.66, samples=19
00:38:58.106    lat (msec)   : 10=0.27%, 20=0.81%, 50=98.92%
00:38:58.106    cpu          : usr=98.66%, sys=0.87%, ctx=37, majf=0, minf=77
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename1: (groupid=0, jobs=1): err= 0: pid=3349541: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=594, BW=2378KiB/s (2435kB/s)(23.2MiB/10011msec)
00:38:58.106      slat (nsec): min=7530, max=81096, avg=23675.97, stdev=15084.80
00:38:58.106      clat (usec): min=9614, max=31493, avg=26722.26, stdev=2183.23
00:38:58.106       lat (usec): min=9631, max=31518, avg=26745.94, stdev=2184.31
00:38:58.106      clat percentiles (usec):
00:38:58.106       |  1.00th=[19792],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.106       | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870],
00:38:58.106       | 70.00th=[27395], 80.00th=[28705], 90.00th=[29754], 95.00th=[30278],
00:38:58.106       | 99.00th=[31065], 99.50th=[31327], 99.90th=[31327], 99.95th=[31589],
00:38:58.106       | 99.99th=[31589]
00:38:58.106     bw (  KiB/s): min= 2176, max= 2560, per=4.17%, avg=2377.21, stdev=142.84, samples=19
00:38:58.106     iops        : min=  544, max=  640, avg=594.21, stdev=35.66, samples=19
00:38:58.106    lat (msec)   : 10=0.24%, 20=0.84%, 50=98.92%
00:38:58.106    cpu          : usr=98.42%, sys=1.17%, ctx=22, majf=0, minf=85
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename1: (groupid=0, jobs=1): err= 0: pid=3349542: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.106      slat (nsec): min=6628, max=90374, avg=38485.25, stdev=17547.21
00:38:58.106      clat (usec): min=11668, max=41147, avg=26674.42, stdev=2098.69
00:38:58.106       lat (usec): min=11678, max=41165, avg=26712.90, stdev=2099.38
00:38:58.106      clat percentiles (usec):
00:38:58.106       |  1.00th=[24249],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.106       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.106       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.106       | 99.00th=[30802], 99.50th=[31065], 99.90th=[41157], 99.95th=[41157],
00:38:58.106       | 99.99th=[41157]
00:38:58.106     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=130.41, samples=19
00:38:58.106     iops        : min=  544, max=  640, avg=590.89, stdev=32.62, samples=19
00:38:58.106    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.106    cpu          : usr=98.32%, sys=1.13%, ctx=61, majf=0, minf=61
00:38:58.106    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.106       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.106       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.106       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.106  filename1: (groupid=0, jobs=1): err= 0: pid=3349543: Tue Dec 10 00:20:12 2024
00:38:58.106    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.106      slat (usec): min=6, max=102, avg=40.17, stdev=16.83
00:38:58.106      clat (usec): min=6341, max=51179, avg=26671.45, stdev=2452.27
00:38:58.107       lat (usec): min=6349, max=51198, avg=26711.61, stdev=2454.47
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[23725],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.107       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.107       | 99.00th=[30802], 99.50th=[31065], 99.90th=[51119], 99.95th=[51119],
00:38:58.107       | 99.99th=[51119]
00:38:58.107     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=137.21, samples=19
00:38:58.107     iops        : min=  544, max=  640, avg=590.89, stdev=34.32, samples=19
00:38:58.107    lat (msec)   : 10=0.27%, 20=0.10%, 50=99.36%, 100=0.27%
00:38:58.107    cpu          : usr=98.07%, sys=1.22%, ctx=114, majf=0, minf=80
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.107       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.107  filename1: (groupid=0, jobs=1): err= 0: pid=3349544: Tue Dec 10 00:20:12 2024
00:38:58.107    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10017msec)
00:38:58.107      slat (nsec): min=8592, max=81760, avg=37478.98, stdev=15562.08
00:38:58.107      clat (usec): min=18814, max=31389, avg=26685.59, stdev=1871.59
00:38:58.107       lat (usec): min=18822, max=31404, avg=26723.07, stdev=1873.71
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[23462],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870],
00:38:58.107       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.107       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:38:58.107       | 99.99th=[31327]
00:38:58.107     bw (  KiB/s): min= 2176, max= 2560, per=4.16%, avg=2370.26, stdev=123.80, samples=19
00:38:58.107     iops        : min=  544, max=  640, avg=592.37, stdev=31.03, samples=19
00:38:58.107    lat (msec)   : 20=0.37%, 50=99.63%
00:38:58.107    cpu          : usr=98.70%, sys=0.91%, ctx=20, majf=0, minf=56
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.107       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.107  filename1: (groupid=0, jobs=1): err= 0: pid=3349545: Tue Dec 10 00:20:12 2024
00:38:58.107    read: IOPS=591, BW=2366KiB/s (2423kB/s)(23.1MiB/10007msec)
00:38:58.107      slat (nsec): min=4523, max=97420, avg=43800.24, stdev=16252.98
00:38:58.107      clat (usec): min=16689, max=39335, avg=26657.08, stdev=1977.80
00:38:58.107       lat (usec): min=16702, max=39348, avg=26700.88, stdev=1978.69
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608],
00:38:58.107       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.107       | 99.00th=[30802], 99.50th=[31065], 99.90th=[39060], 99.95th=[39060],
00:38:58.107       | 99.99th=[39584]
00:38:58.107     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=137.21, samples=19
00:38:58.107     iops        : min=  544, max=  640, avg=590.89, stdev=34.32, samples=19
00:38:58.107    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.107    cpu          : usr=98.31%, sys=1.19%, ctx=99, majf=0, minf=56
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.107       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.107  filename1: (groupid=0, jobs=1): err= 0: pid=3349546: Tue Dec 10 00:20:12 2024
00:38:58.107    read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10016msec)
00:38:58.107      slat (nsec): min=8268, max=86216, avg=36663.48, stdev=17440.72
00:38:58.107      clat (usec): min=9490, max=31579, avg=26646.24, stdev=2129.83
00:38:58.107       lat (usec): min=9505, max=31601, avg=26682.90, stdev=2131.73
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[22152],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.107       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.107       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31589],
00:38:58.107       | 99.99th=[31589]
00:38:58.107     bw (  KiB/s): min= 2171, max= 2560, per=4.17%, avg=2377.32, stdev=150.39, samples=19
00:38:58.107     iops        : min=  542, max=  640, avg=594.21, stdev=37.70, samples=19
00:38:58.107    lat (msec)   : 10=0.27%, 20=0.54%, 50=99.19%
00:38:58.107    cpu          : usr=97.94%, sys=1.31%, ctx=94, majf=0, minf=76
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.107       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.107  filename1: (groupid=0, jobs=1): err= 0: pid=3349547: Tue Dec 10 00:20:12 2024
00:38:58.107    read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10003msec)
00:38:58.107      slat (nsec): min=10131, max=93041, avg=41326.51, stdev=17368.00
00:38:58.107      clat (usec): min=21975, max=31552, avg=26697.92, stdev=1824.89
00:38:58.107       lat (usec): min=21990, max=31592, avg=26739.24, stdev=1826.21
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.107       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.107       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589],
00:38:58.107       | 99.99th=[31589]
00:38:58.107     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2363.79, stdev=130.54, samples=19
00:38:58.107     iops        : min=  544, max=  640, avg=590.79, stdev=32.68, samples=19
00:38:58.107    lat (msec)   : 50=100.00%
00:38:58.107    cpu          : usr=98.66%, sys=0.92%, ctx=64, majf=0, minf=49
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.107       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.107       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.107  filename2: (groupid=0, jobs=1): err= 0: pid=3349548: Tue Dec 10 00:20:12 2024
00:38:58.107    read: IOPS=594, BW=2377KiB/s (2434kB/s)(23.2MiB/10017msec)
00:38:58.107      slat (nsec): min=7026, max=96780, avg=26562.62, stdev=17294.64
00:38:58.107      clat (usec): min=9447, max=31617, avg=26718.69, stdev=2125.51
00:38:58.107       lat (usec): min=9465, max=31639, avg=26745.25, stdev=2127.23
00:38:58.107      clat percentiles (usec):
00:38:58.107       |  1.00th=[22152],  5.00th=[24511], 10.00th=[25035], 20.00th=[25035],
00:38:58.107       | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870],
00:38:58.107       | 70.00th=[27395], 80.00th=[28705], 90.00th=[29754], 95.00th=[30540],
00:38:58.107       | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589],
00:38:58.107       | 99.99th=[31589]
00:38:58.107     bw (  KiB/s): min= 2176, max= 2560, per=4.17%, avg=2373.35, stdev=127.56, samples=20
00:38:58.107     iops        : min=  544, max=  640, avg=593.20, stdev=31.88, samples=20
00:38:58.107    lat (msec)   : 10=0.27%, 20=0.27%, 50=99.46%
00:38:58.107    cpu          : usr=98.38%, sys=1.23%, ctx=36, majf=0, minf=86
00:38:58.107    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.107       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=5952,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349549: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10017msec)
00:38:58.108      slat (nsec): min=6365, max=97176, avg=39024.92, stdev=17915.44
00:38:58.108      clat (usec): min=18768, max=31439, avg=26665.94, stdev=1857.62
00:38:58.108       lat (usec): min=18795, max=31484, avg=26704.97, stdev=1861.46
00:38:58.108      clat percentiles (usec):
00:38:58.108       |  1.00th=[23462],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.108       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.108       | 70.00th=[27395], 80.00th=[28705], 90.00th=[29492], 95.00th=[30016],
00:38:58.108       | 99.00th=[30802], 99.50th=[31065], 99.90th=[31327], 99.95th=[31327],
00:38:58.108       | 99.99th=[31327]
00:38:58.108     bw (  KiB/s): min= 2176, max= 2560, per=4.16%, avg=2370.05, stdev=123.92, samples=19
00:38:58.108     iops        : min=  544, max=  640, avg=592.32, stdev=31.06, samples=19
00:38:58.108    lat (msec)   : 20=0.35%, 50=99.65%
00:38:58.108    cpu          : usr=98.39%, sys=1.12%, ctx=76, majf=0, minf=76
00:38:58.108    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.108       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349550: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10005msec)
00:38:58.108      slat (nsec): min=6754, max=98836, avg=42924.34, stdev=16738.99
00:38:58.108      clat (usec): min=16559, max=39014, avg=26682.44, stdev=1975.75
00:38:58.108       lat (usec): min=16574, max=39035, avg=26725.36, stdev=1976.42
00:38:58.108      clat percentiles (usec):
00:38:58.108       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.108       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608],
00:38:58.108       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.108       | 99.00th=[30802], 99.50th=[31065], 99.90th=[39060], 99.95th=[39060],
00:38:58.108       | 99.99th=[39060]
00:38:58.108     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=137.21, samples=19
00:38:58.108     iops        : min=  544, max=  640, avg=590.89, stdev=34.32, samples=19
00:38:58.108    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.108    cpu          : usr=98.29%, sys=1.19%, ctx=57, majf=0, minf=56
00:38:58.108    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.108       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349551: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=591, BW=2367KiB/s (2423kB/s)(23.1MiB/10006msec)
00:38:58.108      slat (nsec): min=3997, max=99767, avg=43039.32, stdev=18082.34
00:38:58.108      clat (usec): min=16645, max=39511, avg=26640.43, stdev=1987.24
00:38:58.108       lat (usec): min=16660, max=39523, avg=26683.46, stdev=1989.31
00:38:58.108      clat percentiles (usec):
00:38:58.108       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.108       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608],
00:38:58.108       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.108       | 99.00th=[30802], 99.50th=[31065], 99.90th=[39584], 99.95th=[39584],
00:38:58.108       | 99.99th=[39584]
00:38:58.108     bw (  KiB/s): min= 2171, max= 2560, per=4.15%, avg=2363.84, stdev=150.40, samples=19
00:38:58.108     iops        : min=  542, max=  640, avg=590.84, stdev=37.66, samples=19
00:38:58.108    lat (msec)   : 20=0.27%, 50=99.73%
00:38:58.108    cpu          : usr=98.81%, sys=0.76%, ctx=56, majf=0, minf=69
00:38:58.108    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.108       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349552: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=627, BW=2509KiB/s (2569kB/s)(24.5MiB/10007msec)
00:38:58.108      slat (nsec): min=6148, max=95989, avg=15965.85, stdev=13107.27
00:38:58.108      clat (usec): min=9225, max=51156, avg=25444.25, stdev=4550.56
00:38:58.108       lat (usec): min=9233, max=51171, avg=25460.22, stdev=4549.85
00:38:58.108      clat percentiles (usec):
00:38:58.108       |  1.00th=[13698],  5.00th=[17433], 10.00th=[19792], 20.00th=[21627],
00:38:58.108       | 30.00th=[23725], 40.00th=[25035], 50.00th=[25297], 60.00th=[26608],
00:38:58.108       | 70.00th=[27132], 80.00th=[28967], 90.00th=[30540], 95.00th=[32637],
00:38:58.108       | 99.00th=[36963], 99.50th=[39060], 99.90th=[41681], 99.95th=[41681],
00:38:58.108       | 99.99th=[51119]
00:38:58.108     bw (  KiB/s): min= 2160, max= 2880, per=4.38%, avg=2497.05, stdev=169.27, samples=19
00:38:58.108     iops        : min=  540, max=  720, avg=624.16, stdev=42.31, samples=19
00:38:58.108    lat (msec)   : 10=0.10%, 20=10.52%, 50=89.36%, 100=0.03%
00:38:58.108    cpu          : usr=98.05%, sys=1.35%, ctx=73, majf=0, minf=107
00:38:58.108    IO depths    : 1=0.1%, 2=0.1%, 4=3.0%, 8=80.9%, 16=15.9%, 32=0.0%, >=64=0.0%
00:38:58.108       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=89.0%, 8=8.8%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=6276,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349553: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=591, BW=2367KiB/s (2424kB/s)(23.1MiB/10005msec)
00:38:58.108      slat (nsec): min=6845, max=97676, avg=43835.82, stdev=16461.89
00:38:58.108      clat (usec): min=16604, max=39075, avg=26661.93, stdev=1986.31
00:38:58.108       lat (usec): min=16618, max=39091, avg=26705.77, stdev=1987.70
00:38:58.108      clat percentiles (usec):
00:38:58.108       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.108       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608],
00:38:58.108       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.108       | 99.00th=[30802], 99.50th=[31065], 99.90th=[39060], 99.95th=[39060],
00:38:58.108       | 99.99th=[39060]
00:38:58.108     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2364.05, stdev=137.21, samples=19
00:38:58.108     iops        : min=  544, max=  640, avg=590.89, stdev=34.32, samples=19
00:38:58.108    lat (msec)   : 20=0.30%, 50=99.70%
00:38:58.108    cpu          : usr=98.65%, sys=0.82%, ctx=72, majf=0, minf=57
00:38:58.108    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.108       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.108       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.108       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.108  filename2: (groupid=0, jobs=1): err= 0: pid=3349554: Tue Dec 10 00:20:12 2024
00:38:58.108    read: IOPS=592, BW=2370KiB/s (2427kB/s)(23.2MiB/10020msec)
00:38:58.108      slat (usec): min=7, max=104, avg=40.43, stdev=16.81
00:38:58.108      clat (usec): min=17623, max=38764, avg=26658.53, stdev=1893.93
00:38:58.108       lat (usec): min=17634, max=38792, avg=26698.96, stdev=1896.85
00:38:58.108      clat percentiles (usec):
00:38:58.109       |  1.00th=[23462],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.109       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26870],
00:38:58.109       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29492], 95.00th=[30278],
00:38:58.109       | 99.00th=[30802], 99.50th=[31065], 99.90th=[32113], 99.95th=[32113],
00:38:58.109       | 99.99th=[38536]
00:38:58.109     bw (  KiB/s): min= 2171, max= 2560, per=4.16%, avg=2370.26, stdev=137.61, samples=19
00:38:58.109     iops        : min=  542, max=  640, avg=592.42, stdev=34.47, samples=19
00:38:58.109    lat (msec)   : 20=0.35%, 50=99.65%
00:38:58.109    cpu          : usr=98.25%, sys=1.15%, ctx=58, majf=0, minf=74
00:38:58.109    IO depths    : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:38:58.109       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.109       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.109       issued rwts: total=5936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.109       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.109  filename2: (groupid=0, jobs=1): err= 0: pid=3349555: Tue Dec 10 00:20:12 2024
00:38:58.109    read: IOPS=591, BW=2368KiB/s (2424kB/s)(23.1MiB/10002msec)
00:38:58.109      slat (nsec): min=7888, max=97217, avg=34065.56, stdev=18609.35
00:38:58.109      clat (usec): min=20937, max=31630, avg=26762.05, stdev=1855.41
00:38:58.109       lat (usec): min=21007, max=31672, avg=26796.11, stdev=1854.55
00:38:58.109      clat percentiles (usec):
00:38:58.109       |  1.00th=[23987],  5.00th=[24511], 10.00th=[24773], 20.00th=[25035],
00:38:58.109       | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870],
00:38:58.109       | 70.00th=[27395], 80.00th=[28443], 90.00th=[29754], 95.00th=[30540],
00:38:58.109       | 99.00th=[31065], 99.50th=[31065], 99.90th=[31589], 99.95th=[31589],
00:38:58.109       | 99.99th=[31589]
00:38:58.109     bw (  KiB/s): min= 2176, max= 2560, per=4.15%, avg=2363.79, stdev=130.54, samples=19
00:38:58.109     iops        : min=  544, max=  640, avg=590.79, stdev=32.68, samples=19
00:38:58.109    lat (msec)   : 50=100.00%
00:38:58.109    cpu          : usr=98.39%, sys=1.12%, ctx=64, majf=0, minf=77
00:38:58.109    IO depths    : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:38:58.109       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.109       complete  : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:38:58.109       issued rwts: total=5920,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:38:58.109       latency   : target=0, window=0, percentile=100.00%, depth=16
00:38:58.109  
00:38:58.109  Run status group 0 (all jobs):
00:38:58.109     READ: bw=55.6MiB/s (58.3MB/s), 2366KiB/s-2509KiB/s (2423kB/s-2569kB/s), io=557MiB (584MB), run=10002-10020msec
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109  bdev_null0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:38:58.109   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.110  [2024-12-10 00:20:12.598144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.110  bdev_null1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=()
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:58.110  {
00:38:58.110    "params": {
00:38:58.110      "name": "Nvme$subsystem",
00:38:58.110      "trtype": "$TEST_TRANSPORT",
00:38:58.110      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:58.110      "adrfam": "ipv4",
00:38:58.110      "trsvcid": "$NVMF_PORT",
00:38:58.110      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:58.110      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:58.110      "hdgst": ${hdgst:-false},
00:38:58.110      "ddgst": ${ddgst:-false}
00:38:58.110    },
00:38:58.110    "method": "bdev_nvme_attach_controller"
00:38:58.110  }
00:38:58.110  EOF
00:38:58.110  )")
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:38:58.110     00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:58.110  {
00:38:58.110    "params": {
00:38:58.110      "name": "Nvme$subsystem",
00:38:58.110      "trtype": "$TEST_TRANSPORT",
00:38:58.110      "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:58.110      "adrfam": "ipv4",
00:38:58.110      "trsvcid": "$NVMF_PORT",
00:38:58.110      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:58.110      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:58.110      "hdgst": ${hdgst:-false},
00:38:58.110      "ddgst": ${ddgst:-false}
00:38:58.110    },
00:38:58.110    "method": "bdev_nvme_attach_controller"
00:38:58.110  }
00:38:58.110  EOF
00:38:58.110  )")
00:38:58.110     00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:38:58.110    00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq .
00:38:58.110     00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:38:58.110     00:20:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:58.110    "params": {
00:38:58.110      "name": "Nvme0",
00:38:58.110      "trtype": "tcp",
00:38:58.110      "traddr": "10.0.0.2",
00:38:58.110      "adrfam": "ipv4",
00:38:58.110      "trsvcid": "4420",
00:38:58.110      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:58.110      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:58.110      "hdgst": false,
00:38:58.110      "ddgst": false
00:38:58.110    },
00:38:58.110    "method": "bdev_nvme_attach_controller"
00:38:58.110  },{
00:38:58.110    "params": {
00:38:58.110      "name": "Nvme1",
00:38:58.110      "trtype": "tcp",
00:38:58.110      "traddr": "10.0.0.2",
00:38:58.110      "adrfam": "ipv4",
00:38:58.110      "trsvcid": "4420",
00:38:58.110      "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:38:58.110      "hostnqn": "nqn.2016-06.io.spdk:host1",
00:38:58.110      "hdgst": false,
00:38:58.110      "ddgst": false
00:38:58.110    },
00:38:58.110    "method": "bdev_nvme_attach_controller"
00:38:58.110  }'
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:58.110   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:58.111   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:38:58.111    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:38:58.111    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:38:58.111    00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:38:58.111   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:38:58.111   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:38:58.111   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:38:58.111   00:20:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:38:58.111  filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:58.111  ...
00:38:58.111  filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:38:58.111  ...
00:38:58.111  fio-3.35
00:38:58.111  Starting 4 threads
00:39:03.380  
00:39:03.380  filename0: (groupid=0, jobs=1): err= 0: pid=3351450: Tue Dec 10 00:20:18 2024
00:39:03.380    read: IOPS=2844, BW=22.2MiB/s (23.3MB/s)(111MiB/5003msec)
00:39:03.380      slat (nsec): min=6064, max=57686, avg=8700.21, stdev=3002.27
00:39:03.380      clat (usec): min=761, max=5295, avg=2785.04, stdev=406.32
00:39:03.380       lat (usec): min=770, max=5306, avg=2793.74, stdev=406.07
00:39:03.380      clat percentiles (usec):
00:39:03.380       |  1.00th=[ 1647],  5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2474],
00:39:03.380       | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 2966],
00:39:03.380       | 70.00th=[ 2999], 80.00th=[ 2999], 90.00th=[ 3163], 95.00th=[ 3326],
00:39:03.380       | 99.00th=[ 3916], 99.50th=[ 4080], 99.90th=[ 5014], 99.95th=[ 5014],
00:39:03.380       | 99.99th=[ 5211]
00:39:03.380     bw (  KiB/s): min=21344, max=23824, per=26.58%, avg=22561.78, stdev=846.05, samples=9
00:39:03.380     iops        : min= 2668, max= 2978, avg=2820.22, stdev=105.76, samples=9
00:39:03.380    lat (usec)   : 1000=0.27%
00:39:03.380    lat (msec)   : 2=2.31%, 4=96.68%, 10=0.74%
00:39:03.380    cpu          : usr=95.90%, sys=3.80%, ctx=9, majf=0, minf=9
00:39:03.380    IO depths    : 1=0.3%, 2=7.0%, 4=64.6%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:03.380       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       complete  : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       issued rwts: total=14233,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:03.380       latency   : target=0, window=0, percentile=100.00%, depth=8
00:39:03.380  filename0: (groupid=0, jobs=1): err= 0: pid=3351451: Tue Dec 10 00:20:18 2024
00:39:03.380    read: IOPS=2606, BW=20.4MiB/s (21.4MB/s)(102MiB/5001msec)
00:39:03.380      slat (nsec): min=6087, max=45192, avg=8634.47, stdev=3041.59
00:39:03.380      clat (usec): min=659, max=5569, avg=3043.81, stdev=451.03
00:39:03.380       lat (usec): min=670, max=5582, avg=3052.45, stdev=450.85
00:39:03.380      clat percentiles (usec):
00:39:03.380       |  1.00th=[ 2057],  5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802],
00:39:03.380       | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999],
00:39:03.380       | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3589], 95.00th=[ 3884],
00:39:03.380       | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5342],
00:39:03.380       | 99.99th=[ 5538]
00:39:03.380     bw (  KiB/s): min=19968, max=21600, per=24.56%, avg=20853.33, stdev=535.04, samples=9
00:39:03.380     iops        : min= 2496, max= 2700, avg=2606.67, stdev=66.88, samples=9
00:39:03.380    lat (usec)   : 750=0.01%, 1000=0.02%
00:39:03.380    lat (msec)   : 2=0.71%, 4=94.85%, 10=4.41%
00:39:03.380    cpu          : usr=95.86%, sys=3.80%, ctx=8, majf=0, minf=9
00:39:03.380    IO depths    : 1=0.1%, 2=2.7%, 4=69.3%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:03.380       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       complete  : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       issued rwts: total=13036,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:03.380       latency   : target=0, window=0, percentile=100.00%, depth=8
00:39:03.380  filename1: (groupid=0, jobs=1): err= 0: pid=3351452: Tue Dec 10 00:20:18 2024
00:39:03.380    read: IOPS=2538, BW=19.8MiB/s (20.8MB/s)(99.2MiB/5001msec)
00:39:03.380      slat (nsec): min=6075, max=59247, avg=8425.06, stdev=3015.20
00:39:03.380      clat (usec): min=662, max=5499, avg=3126.42, stdev=412.04
00:39:03.380       lat (usec): min=672, max=5510, avg=3134.84, stdev=411.90
00:39:03.380      clat percentiles (usec):
00:39:03.380       |  1.00th=[ 2278],  5.00th=[ 2671], 10.00th=[ 2802], 20.00th=[ 2933],
00:39:03.380       | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032],
00:39:03.380       | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3884],
00:39:03.380       | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5342],
00:39:03.380       | 99.99th=[ 5473]
00:39:03.380     bw (  KiB/s): min=19936, max=21264, per=24.04%, avg=20408.89, stdev=489.91, samples=9
00:39:03.380     iops        : min= 2492, max= 2658, avg=2551.11, stdev=61.24, samples=9
00:39:03.380    lat (usec)   : 750=0.02%, 1000=0.03%
00:39:03.380    lat (msec)   : 2=0.20%, 4=95.53%, 10=4.22%
00:39:03.380    cpu          : usr=95.82%, sys=3.86%, ctx=10, majf=0, minf=9
00:39:03.380    IO depths    : 1=0.1%, 2=1.5%, 4=70.6%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:03.380       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       complete  : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.380       issued rwts: total=12697,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:03.380       latency   : target=0, window=0, percentile=100.00%, depth=8
00:39:03.380  filename1: (groupid=0, jobs=1): err= 0: pid=3351453: Tue Dec 10 00:20:18 2024
00:39:03.381    read: IOPS=2624, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec)
00:39:03.381      slat (nsec): min=6072, max=53711, avg=8484.67, stdev=3006.19
00:39:03.381      clat (usec): min=1122, max=5568, avg=3024.29, stdev=400.40
00:39:03.381       lat (usec): min=1129, max=5575, avg=3032.78, stdev=400.30
00:39:03.381      clat percentiles (usec):
00:39:03.381       |  1.00th=[ 2114],  5.00th=[ 2442], 10.00th=[ 2573], 20.00th=[ 2802],
00:39:03.381       | 30.00th=[ 2933], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999],
00:39:03.381       | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3720],
00:39:03.381       | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5080], 99.95th=[ 5276],
00:39:03.381       | 99.99th=[ 5538]
00:39:03.381     bw (  KiB/s): min=20272, max=21696, per=24.85%, avg=21094.44, stdev=421.51, samples=9
00:39:03.381     iops        : min= 2534, max= 2712, avg=2636.78, stdev=52.68, samples=9
00:39:03.381    lat (msec)   : 2=0.56%, 4=96.74%, 10=2.70%
00:39:03.381    cpu          : usr=96.06%, sys=3.64%, ctx=8, majf=0, minf=9
00:39:03.381    IO depths    : 1=0.2%, 2=2.0%, 4=69.6%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:03.381       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.381       complete  : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:03.381       issued rwts: total=13124,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:03.381       latency   : target=0, window=0, percentile=100.00%, depth=8
00:39:03.381  
00:39:03.381  Run status group 0 (all jobs):
00:39:03.381     READ: bw=82.9MiB/s (86.9MB/s), 19.8MiB/s-22.2MiB/s (20.8MB/s-23.3MB/s), io=415MiB (435MB), run=5001-5003msec
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381  
00:39:03.381  real	0m24.204s
00:39:03.381  user	4m51.603s
00:39:03.381  sys	0m5.119s
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:39:03.381  ************************************
00:39:03.381  END TEST fio_dif_rand_params
00:39:03.381  ************************************
00:39:03.381   00:20:18 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:39:03.381   00:20:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:39:03.381   00:20:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:39:03.381  ************************************
00:39:03.381  START TEST fio_dif_digest
00:39:03.381  ************************************
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@"
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:03.381  bdev_null0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:03.381  [2024-12-10 00:20:18.983526] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62
00:39:03.381    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0
00:39:03.381    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:39:03.381    00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=()
00:39:03.381   00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:03.381    00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:39:03.382  {
00:39:03.382    "params": {
00:39:03.382      "name": "Nvme$subsystem",
00:39:03.382      "trtype": "$TEST_TRANSPORT",
00:39:03.382      "traddr": "$NVMF_FIRST_TARGET_IP",
00:39:03.382      "adrfam": "ipv4",
00:39:03.382      "trsvcid": "$NVMF_PORT",
00:39:03.382      "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:39:03.382      "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:39:03.382      "hdgst": ${hdgst:-false},
00:39:03.382      "ddgst": ${ddgst:-false}
00:39:03.382    },
00:39:03.382    "method": "bdev_nvme_attach_controller"
00:39:03.382  }
00:39:03.382  EOF
00:39:03.382  )")
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib=
00:39:03.382   00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:39:03.382     00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 ))
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files ))
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:39:03.382    00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq .
00:39:03.382     00:20:18 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:39:03.382     00:20:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:39:03.382    "params": {
00:39:03.382      "name": "Nvme0",
00:39:03.382      "trtype": "tcp",
00:39:03.382      "traddr": "10.0.0.2",
00:39:03.382      "adrfam": "ipv4",
00:39:03.382      "trsvcid": "4420",
00:39:03.382      "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:39:03.382      "hostnqn": "nqn.2016-06.io.spdk:host0",
00:39:03.382      "hdgst": true,
00:39:03.382      "ddgst": true
00:39:03.382    },
00:39:03.382    "method": "bdev_nvme_attach_controller"
00:39:03.382  }'
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:39:03.382    00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:39:03.382    00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:39:03.382    00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:39:03.382   00:20:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:39:03.640  filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:39:03.640  ...
00:39:03.640  fio-3.35
00:39:03.640  Starting 3 threads
00:39:15.856  
00:39:15.856  filename0: (groupid=0, jobs=1): err= 0: pid=3352596: Tue Dec 10 00:20:29 2024
00:39:15.856    read: IOPS=294, BW=36.9MiB/s (38.7MB/s)(371MiB/10048msec)
00:39:15.856      slat (nsec): min=6336, max=39725, avg=11714.21, stdev=2555.78
00:39:15.856      clat (usec): min=7953, max=54100, avg=10141.42, stdev=1330.62
00:39:15.856       lat (usec): min=7965, max=54107, avg=10153.14, stdev=1330.53
00:39:15.856      clat percentiles (usec):
00:39:15.856       |  1.00th=[ 8455],  5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503],
00:39:15.856       | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290],
00:39:15.856       | 70.00th=[10421], 80.00th=[10683], 90.00th=[11207], 95.00th=[11469],
00:39:15.856       | 99.00th=[12256], 99.50th=[12649], 99.90th=[14222], 99.95th=[47973],
00:39:15.856       | 99.99th=[54264]
00:39:15.856     bw (  KiB/s): min=36096, max=39424, per=35.90%, avg=37913.60, stdev=1030.38, samples=20
00:39:15.856     iops        : min=  282, max=  308, avg=296.20, stdev= 8.05, samples=20
00:39:15.856    lat (msec)   : 10=45.92%, 20=54.01%, 50=0.03%, 100=0.03%
00:39:15.856    cpu          : usr=95.83%, sys=3.85%, ctx=18, majf=0, minf=0
00:39:15.856    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:15.856       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       issued rwts: total=2964,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:15.856       latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:15.856  filename0: (groupid=0, jobs=1): err= 0: pid=3352597: Tue Dec 10 00:20:29 2024
00:39:15.856    read: IOPS=261, BW=32.7MiB/s (34.3MB/s)(329MiB/10044msec)
00:39:15.856      slat (nsec): min=6356, max=30729, avg=11919.85, stdev=2547.87
00:39:15.856      clat (usec): min=8775, max=50060, avg=11431.74, stdev=1382.14
00:39:15.856       lat (usec): min=8787, max=50068, avg=11443.66, stdev=1382.12
00:39:15.856      clat percentiles (usec):
00:39:15.856       |  1.00th=[ 9503],  5.00th=[10028], 10.00th=[10290], 20.00th=[10683],
00:39:15.856       | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600],
00:39:15.856       | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13042],
00:39:15.856       | 99.00th=[13960], 99.50th=[14222], 99.90th=[15139], 99.95th=[46924],
00:39:15.856       | 99.99th=[50070]
00:39:15.856     bw (  KiB/s): min=32000, max=35328, per=31.84%, avg=33625.60, stdev=1083.81, samples=20
00:39:15.856     iops        : min=  250, max=  276, avg=262.70, stdev= 8.47, samples=20
00:39:15.856    lat (msec)   : 10=4.49%, 20=95.44%, 50=0.04%, 100=0.04%
00:39:15.856    cpu          : usr=95.90%, sys=3.77%, ctx=18, majf=0, minf=9
00:39:15.856    IO depths    : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:15.856       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       issued rwts: total=2629,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:15.856       latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:15.856  filename0: (groupid=0, jobs=1): err= 0: pid=3352598: Tue Dec 10 00:20:29 2024
00:39:15.856    read: IOPS=268, BW=33.6MiB/s (35.2MB/s)(337MiB/10044msec)
00:39:15.856      slat (nsec): min=6378, max=34174, avg=12458.62, stdev=3350.11
00:39:15.856      clat (usec): min=8344, max=50439, avg=11138.74, stdev=1362.08
00:39:15.856       lat (usec): min=8357, max=50450, avg=11151.19, stdev=1362.00
00:39:15.856      clat percentiles (usec):
00:39:15.856       |  1.00th=[ 9241],  5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421],
00:39:15.856       | 30.00th=[10683], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207],
00:39:15.856       | 70.00th=[11469], 80.00th=[11863], 90.00th=[12256], 95.00th=[12649],
00:39:15.856       | 99.00th=[13566], 99.50th=[13829], 99.90th=[15401], 99.95th=[47449],
00:39:15.856       | 99.99th=[50594]
00:39:15.856     bw (  KiB/s): min=33024, max=35584, per=32.67%, avg=34508.80, stdev=974.29, samples=20
00:39:15.856     iops        : min=  258, max=  278, avg=269.60, stdev= 7.61, samples=20
00:39:15.856    lat (msec)   : 10=9.01%, 20=90.92%, 50=0.04%, 100=0.04%
00:39:15.856    cpu          : usr=96.03%, sys=3.65%, ctx=17, majf=0, minf=1
00:39:15.856    IO depths    : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:39:15.856       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:39:15.856       issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:39:15.856       latency   : target=0, window=0, percentile=100.00%, depth=3
00:39:15.856  
00:39:15.856  Run status group 0 (all jobs):
00:39:15.856     READ: bw=103MiB/s (108MB/s), 32.7MiB/s-36.9MiB/s (34.3MB/s-38.7MB/s), io=1036MiB (1087MB), run=10044-10048msec
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@"
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:39:15.856   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:15.857  
00:39:15.857  real	0m11.080s
00:39:15.857  user	0m35.729s
00:39:15.857  sys	0m1.413s
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:15.857   00:20:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x
00:39:15.857  ************************************
00:39:15.857  END TEST fio_dif_digest
00:39:15.857  ************************************
00:39:15.857   00:20:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:39:15.857   00:20:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@121 -- # sync
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@124 -- # set +e
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20}
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:39:15.857  rmmod nvme_tcp
00:39:15.857  rmmod nvme_fabrics
00:39:15.857  rmmod nvme_keyring
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@128 -- # set -e
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@129 -- # return 0
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3344307 ']'
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3344307
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3344307 ']'
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3344307
00:39:15.857    00:20:30 nvmf_dif -- common/autotest_common.sh@959 -- # uname
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:15.857    00:20:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3344307
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3344307'
00:39:15.857  killing process with pid 3344307
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3344307
00:39:15.857   00:20:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3344307
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:39:15.857   00:20:30 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:39:17.236  Waiting for block devices as requested
00:39:17.236  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:39:17.496  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:39:17.496  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:39:17.496  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:39:17.755  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:39:17.755  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:39:17.755  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:39:18.023  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:39:18.023  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:39:18.023  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:39:18.023  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:39:18.284  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:39:18.284  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:39:18.284  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:39:18.543  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:39:18.543  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:39:18.543  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@297 -- # iptr
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns
00:39:18.802   00:20:34 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:18.802   00:20:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:39:18.802    00:20:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:20.707   00:20:36 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:39:20.707  
00:39:20.707  real	1m13.779s
00:39:20.707  user	7m9.420s
00:39:20.707  sys	0m20.428s
00:39:20.707   00:20:36 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:20.707   00:20:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:39:20.707  ************************************
00:39:20.707  END TEST nvmf_dif
00:39:20.707  ************************************
00:39:20.707   00:20:36  -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:39:20.707   00:20:36  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:39:20.707   00:20:36  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:39:20.707   00:20:36  -- common/autotest_common.sh@10 -- # set +x
00:39:20.967  ************************************
00:39:20.967  START TEST nvmf_abort_qd_sizes
00:39:20.967  ************************************
00:39:20.967   00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh
00:39:20.967  * Looking for test storage...
00:39:20.967  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-:
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-:
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<'
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 ))
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:39:20.967     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:39:20.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:20.967  		--rc genhtml_branch_coverage=1
00:39:20.967  		--rc genhtml_function_coverage=1
00:39:20.967  		--rc genhtml_legend=1
00:39:20.967  		--rc geninfo_all_blocks=1
00:39:20.967  		--rc geninfo_unexecuted_blocks=1
00:39:20.967  		
00:39:20.967  		'
00:39:20.967    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:39:20.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:20.967  		--rc genhtml_branch_coverage=1
00:39:20.967  		--rc genhtml_function_coverage=1
00:39:20.967  		--rc genhtml_legend=1
00:39:20.968  		--rc geninfo_all_blocks=1
00:39:20.968  		--rc geninfo_unexecuted_blocks=1
00:39:20.968  		
00:39:20.968  		'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:39:20.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:20.968  		--rc genhtml_branch_coverage=1
00:39:20.968  		--rc genhtml_function_coverage=1
00:39:20.968  		--rc genhtml_legend=1
00:39:20.968  		--rc geninfo_all_blocks=1
00:39:20.968  		--rc geninfo_unexecuted_blocks=1
00:39:20.968  		
00:39:20.968  		'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:39:20.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:39:20.968  		--rc genhtml_branch_coverage=1
00:39:20.968  		--rc genhtml_function_coverage=1
00:39:20.968  		--rc genhtml_legend=1
00:39:20.968  		--rc geninfo_all_blocks=1
00:39:20.968  		--rc geninfo_unexecuted_blocks=1
00:39:20.968  		
00:39:20.968  		'
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:39:20.968     00:20:36 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:39:20.968      00:20:36 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:20.968      00:20:36 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:20.968      00:20:36 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:20.968      00:20:36 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH
00:39:20.968      00:20:36 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:39:20.968  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:39:20.968    00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable
00:39:20.968   00:20:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=()
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:39:27.721  Found 0000:af:00.0 (0x8086 - 0x159b)
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:39:27.721  Found 0000:af:00.1 (0x8086 - 0x159b)
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]]
00:39:27.721   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:39:27.722  Found net devices under 0000:af:00.0: cvl_0_0
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]]
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:39:27.722  Found net devices under 0000:af:00.1: cvl_0_1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:39:27.722  PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:39:27.722  64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms
00:39:27.722  
00:39:27.722  --- 10.0.0.2 ping statistics ---
00:39:27.722  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:27.722  rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:39:27.722  PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:39:27.722  64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms
00:39:27.722  
00:39:27.722  --- 10.0.0.1 ping statistics ---
00:39:27.722  1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:27.722  rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
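The `nvmf_tcp_init` sequence traced above can be summarized as a short shell sketch. The interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken from this log and will differ on other machines; this is a sketch assuming root privileges, not the canonical `nvmf/common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init:
# the target-side NIC is moved into its own network namespace so the
# initiator (host netns) reaches it over a real TCP path.
set -euo pipefail

TGT_IF=cvl_0_0          # target-side interface (names from this log)
INI_IF=cvl_0_1          # initiator-side interface
NS=cvl_0_0_ns_spdk      # namespace the target process will run in

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # target NIC into the namespace

ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP, host netns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP, inside netns

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP traffic in on the initiator side, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Any target process then has to be launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly what prepending `NVMF_TARGET_NS_CMD` to `NVMF_APP` accomplishes in the trace above.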
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']'
00:39:27.722   00:20:42 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:39:29.627  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:39:29.627  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:39:29.627  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:39:29.627  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:39:29.627  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:39:29.627  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:39:29.890  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:39:30.836  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3360473
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3360473
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3360473 ']'
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:39:30.836  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable
00:39:30.836   00:20:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:30.836  [2024-12-10 00:20:46.586473] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:39:30.836  [2024-12-10 00:20:46.586514] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:39:30.837  [2024-12-10 00:20:46.665258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:39:31.097  [2024-12-10 00:20:46.706663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:39:31.097  [2024-12-10 00:20:46.706701] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:39:31.097  [2024-12-10 00:20:46.706708] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:39:31.097  [2024-12-10 00:20:46.706713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:39:31.097  [2024-12-10 00:20:46.706719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:39:31.097  [2024-12-10 00:20:46.708204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:39:31.097  [2024-12-10 00:20:46.708206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:39:31.097  [2024-12-10 00:20:46.708311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:31.097  [2024-12-10 00:20:46.708312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:39:31.662   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:39:31.662   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0
00:39:31.662   00:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:39:31.662   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable
00:39:31.662   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]]
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]})
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]]
00:39:31.663     00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 ))
00:39:31.663    00:20:47 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 ))
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:39:31.663   00:20:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:31.663  ************************************
00:39:31.663  START TEST spdk_target_abort
00:39:31.663  ************************************
00:39:31.663   00:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target
00:39:31.663   00:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target
00:39:31.663   00:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
00:39:31.663   00:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:31.663   00:20:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:34.940  spdk_targetn1
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:34.940  [2024-12-10 00:20:50.330650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:34.940  [2024-12-10 00:20:50.382974] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
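The `rpc_cmd` calls traced above (attach controller, create transport, subsystem, namespace, listener) map directly onto `scripts/rpc.py` invocations. A sketch, assuming an SPDK checkout and the PCI address/NQN from this log:

```shell
# Configure the namespaced nvmf_tgt via JSON-RPC; RPC names and arguments
# are the same ones visible in the trace above.
RPC="scripts/rpc.py"

# Expose the local NVMe device (0000:5e:00.0 on this machine) as bdev
# "spdk_target"; SPDK names its first namespace spdk_targetn1.
$RPC bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target

# TCP transport with optional features enabled (-o) and 8 KiB in-capsule data.
$RPC nvmf_create_transport -t tcp -o -u 8192

# Subsystem allowing any host (-a), backed by the new bdev, listening on the
# target-side address inside the namespace.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
```

Because the target runs inside `cvl_0_0_ns_spdk`, the 10.0.0.2 listener is only reachable from the host netns over the physical link, which is what the abort runs below exercise.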
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2'
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:34.940   00:20:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:38.221  Initializing NVMe Controllers
00:39:38.221  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:39:38.221  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:38.221  Initialization complete. Launching workers.
00:39:38.221  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16022, failed: 0
00:39:38.221  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1394, failed to submit 14628
00:39:38.221  	 success 724, unsuccessful 670, failed 0
00:39:38.221   00:20:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:38.221   00:20:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:41.497  Initializing NVMe Controllers
00:39:41.497  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:39:41.497  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:41.497  Initialization complete. Launching workers.
00:39:41.497  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8783, failed: 0
00:39:41.497  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7532
00:39:41.497  	 success 339, unsuccessful 912, failed 0
00:39:41.497   00:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:41.497   00:20:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:44.769  Initializing NVMe Controllers
00:39:44.769  Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn
00:39:44.769  Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:44.769  Initialization complete. Launching workers.
00:39:44.769  NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38619, failed: 0
00:39:44.769  CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2816, failed to submit 35803
00:39:44.769  	 success 599, unsuccessful 2217, failed 0
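The three runs above differ only in `-q`, the abort queue depth under test. The loop in `abort_qd_sizes.sh` amounts to the following sketch (binary path and `-r` connection string taken from this log; `-M 50` is the read/write mix for `-w rw`, as used in the traced commands):

```shell
# Sweep abort queue depths against the TCP listener. Each run reports I/O
# completed, aborts submitted, and success/unsuccessful/failed abort counts.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done
```

"Unsuccessful" aborts here are expected: the target may legitimately complete an I/O before the abort arrives, so the test only requires that no abort *fails* outright (the `failed 0` column).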
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:39:44.769   00:21:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3360473
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3360473 ']'
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3360473
00:39:45.699    00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:45.699    00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3360473
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3360473'
00:39:45.699  killing process with pid 3360473
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3360473
00:39:45.699   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3360473
00:39:45.957  
00:39:45.957  real	0m14.108s
00:39:45.957  user	0m56.069s
00:39:45.957  sys	0m2.702s
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:39:45.957  ************************************
00:39:45.957  END TEST spdk_target_abort
00:39:45.957  ************************************
00:39:45.957   00:21:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:39:45.957   00:21:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:39:45.957   00:21:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable
00:39:45.957   00:21:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:39:45.957  ************************************
00:39:45.957  START TEST kernel_target_abort
00:39:45.957  ************************************
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=()
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:39:45.957    00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:39:45.957   00:21:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:39:48.492  Waiting for block devices as requested
00:39:48.751  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:39:48.751  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:39:48.751  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:39:49.010  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:39:49.010  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:39:49.010  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:39:49.269  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:39:49.269  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:39:49.269  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:39:49.269  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:39:49.528  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:39:49.528  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:39:49.528  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:39:49.786  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:39:49.786  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:39:49.786  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:39:50.045  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:39:50.045  No valid GPT data, bailing
00:39:50.045    00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt=
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:39:50.045   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:39:50.046   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:39:50.304  
00:39:50.304  Discovery Log Number of Records 2, Generation counter 2
00:39:50.304  =====Discovery Log Entry 0======
00:39:50.304  trtype:  tcp
00:39:50.304  adrfam:  ipv4
00:39:50.304  subtype: current discovery subsystem
00:39:50.304  treq:    not specified, sq flow control disable supported
00:39:50.304  portid:  1
00:39:50.304  trsvcid: 4420
00:39:50.304  subnqn:  nqn.2014-08.org.nvmexpress.discovery
00:39:50.304  traddr:  10.0.0.1
00:39:50.304  eflags:  none
00:39:50.304  sectype: none
00:39:50.304  =====Discovery Log Entry 1======
00:39:50.304  trtype:  tcp
00:39:50.304  adrfam:  ipv4
00:39:50.304  subtype: nvme subsystem
00:39:50.304  treq:    not specified, sq flow control disable supported
00:39:50.304  portid:  1
00:39:50.304  trsvcid: 4420
00:39:50.304  subnqn:  nqn.2016-06.io.spdk:testnqn
00:39:50.304  traddr:  10.0.0.1
00:39:50.304  eflags:  none
00:39:50.304  sectype: none
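The `configure_kernel_target` steps traced above (the `mkdir`/`echo`/`ln -s` sequence) follow the Linux kernel nvmet configfs layout. The log shows only the echoed values, not the attribute files they land in, so the mapping below is inferred from the standard nvmet layout and should be read as a sketch, not the verbatim `nvmf/common.sh` code:

```shell
# Sketch: export /dev/nvme0n1 through the kernel NVMe/TCP target via configfs.
NVMET=/sys/kernel/config/nvmet
SUBSYS=$NVMET/subsystems/nqn.2016-06.io.spdk:testnqn
NS=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1

modprobe nvmet nvmet-tcp

mkdir -p "$SUBSYS" "$NS" "$PORT"

echo SPDK-nqn.2016-06.io.spdk:testnqn > "$SUBSYS/attr_serial"  # inferred attribute
echo 1 > "$SUBSYS/attr_allow_any_host"

echo /dev/nvme0n1 > "$NS/device_path"   # backing block device
echo 1 > "$NS/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"     # listen address (initiator-side IP here)
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

# Publish: linking the subsystem under the port makes it discoverable.
ln -s "$SUBSYS" "$PORT/subsystems/nqn.2016-06.io.spdk:testnqn"
```

The `nvme discover` output above is the verification step: entry 0 is the discovery subsystem itself, entry 1 the newly linked `nqn.2016-06.io.spdk:testnqn` on 10.0.0.1:4420.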
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4'
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1'
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420'
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:50.304   00:21:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:53.586  Initializing NVMe Controllers
00:39:53.586  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:39:53.586  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:53.586  Initialization complete. Launching workers.
00:39:53.586  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79948, failed: 0
00:39:53.586  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 79948, failed to submit 0
00:39:53.586  	 success 0, unsuccessful 79948, failed 0
00:39:53.586   00:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:53.586   00:21:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:39:56.864  Initializing NVMe Controllers
00:39:56.864  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:39:56.864  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:39:56.864  Initialization complete. Launching workers.
00:39:56.864  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145269, failed: 0
00:39:56.864  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28106, failed to submit 117163
00:39:56.864  	 success 0, unsuccessful 28106, failed 0
00:39:56.864   00:21:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:39:56.864   00:21:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:40:00.145  Initializing NVMe Controllers
00:40:00.145  Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:40:00.145  Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:40:00.145  Initialization complete. Launching workers.
00:40:00.145  NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 132454, failed: 0
00:40:00.145  CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33130, failed to submit 99324
00:40:00.145  	 success 0, unsuccessful 33130, failed 0
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]]
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*)
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet
00:40:00.145   00:21:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:40:02.681  0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:40:02.681  0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:40:03.248  0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
00:40:03.505  
00:40:03.505  real	0m17.489s
00:40:03.505  user	0m8.525s
00:40:03.505  sys	0m5.330s
00:40:03.505   00:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:03.505   00:21:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x
00:40:03.505  ************************************
00:40:03.505  END TEST kernel_target_abort
00:40:03.505  ************************************
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20}
00:40:03.505   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:40:03.505  rmmod nvme_tcp
00:40:03.505  rmmod nvme_fabrics
00:40:03.506  rmmod nvme_keyring
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3360473 ']'
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3360473
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3360473 ']'
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3360473
00:40:03.506  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3360473) - No such process
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3360473 is not found'
00:40:03.506  Process with pid 3360473 is not found
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']'
00:40:03.506   00:21:19 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:40:06.040  Waiting for block devices as requested
00:40:06.299  0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:40:06.299  0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:40:06.557  0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:40:06.557  0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:40:06.557  0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:40:06.557  0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:40:06.815  0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:40:06.815  0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:40:06.815  0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:40:07.074  0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:40:07.074  0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:40:07.074  0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:40:07.333  0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:40:07.333  0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:40:07.333  0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:40:07.333  0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:40:07.592  0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:40:07.592   00:21:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:40:07.592    00:21:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:40:10.124   00:21:25 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:40:10.124  
00:40:10.124  real	0m48.820s
00:40:10.124  user	1m9.090s
00:40:10.124  sys	0m16.688s
00:40:10.124   00:21:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:10.124   00:21:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:40:10.124  ************************************
00:40:10.124  END TEST nvmf_abort_qd_sizes
00:40:10.124  ************************************
00:40:10.125   00:21:25  -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:40:10.125   00:21:25  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:40:10.125   00:21:25  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:40:10.125   00:21:25  -- common/autotest_common.sh@10 -- # set +x
00:40:10.125  ************************************
00:40:10.125  START TEST keyring_file
00:40:10.125  ************************************
00:40:10.125   00:21:25 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh
00:40:10.125  * Looking for test storage...
00:40:10.125  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:40:10.125     00:21:25 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version
00:40:10.125     00:21:25 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-:
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-:
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<'
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@345 -- # : 1
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 ))
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@365 -- # decimal 1
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@353 -- # local d=1
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@355 -- # echo 1
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@366 -- # decimal 2
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@353 -- # local d=2
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:40:10.125     00:21:25 keyring_file -- scripts/common.sh@355 -- # echo 2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:40:10.125    00:21:25 keyring_file -- scripts/common.sh@368 -- # return 0
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:40:10.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:10.125  		--rc genhtml_branch_coverage=1
00:40:10.125  		--rc genhtml_function_coverage=1
00:40:10.125  		--rc genhtml_legend=1
00:40:10.125  		--rc geninfo_all_blocks=1
00:40:10.125  		--rc geninfo_unexecuted_blocks=1
00:40:10.125  		
00:40:10.125  		'
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:40:10.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:10.125  		--rc genhtml_branch_coverage=1
00:40:10.125  		--rc genhtml_function_coverage=1
00:40:10.125  		--rc genhtml_legend=1
00:40:10.125  		--rc geninfo_all_blocks=1
00:40:10.125  		--rc geninfo_unexecuted_blocks=1
00:40:10.125  		
00:40:10.125  		'
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:40:10.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:10.125  		--rc genhtml_branch_coverage=1
00:40:10.125  		--rc genhtml_function_coverage=1
00:40:10.125  		--rc genhtml_legend=1
00:40:10.125  		--rc geninfo_all_blocks=1
00:40:10.125  		--rc geninfo_unexecuted_blocks=1
00:40:10.125  		
00:40:10.125  		'
00:40:10.125    00:21:25 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:40:10.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:10.125  		--rc genhtml_branch_coverage=1
00:40:10.125  		--rc genhtml_function_coverage=1
00:40:10.125  		--rc genhtml_legend=1
00:40:10.125  		--rc geninfo_all_blocks=1
00:40:10.125  		--rc geninfo_unexecuted_blocks=1
00:40:10.125  		
00:40:10.125  		'
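The `cmp_versions` helper exercised in the trace above (scripts/common.sh@333–@368) splits each version string on `.`, `-`, and `:` via `IFS=.-:` and compares the numeric components pairwise, padding the shorter side with zeros. A minimal Python equivalent of that idea — my own sketch of the comparison logic, not SPDK's script:

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Return True if version a sorts before b, comparing numeric
    components split on '.', '-' or ':' (shorter versions are padded
    with zeros, matching the shell helper's loop bound)."""
    pa = [int(x) for x in re.split(r"[.:\-]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.:\-]", b) if x.isdigit()]
    width = max(len(pa), len(pb))
    pa += [0] * (width - len(pa))
    pb += [0] * (width - len(pb))
    return pa < pb
```

This reproduces the check driving the trace: `lt 1.15 2` is true (1 < 2 at the first component), which is why the log takes the "old lcov" branch and sets the `--rc lcov_branch_coverage=1 ...` option strings.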
00:40:10.125   00:21:25 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:40:10.125    00:21:25 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:40:10.125      00:21:25 keyring_file -- nvmf/common.sh@7 -- # uname -s
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:40:10.125      00:21:25 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:40:10.125      00:21:25 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob
00:40:10.125      00:21:25 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:40:10.125      00:21:25 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:10.125      00:21:25 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:10.125       00:21:25 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:10.125       00:21:25 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:10.125       00:21:25 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:10.125       00:21:25 keyring_file -- paths/export.sh@5 -- # export PATH
00:40:10.125       00:21:25 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@51 -- # : 0
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:40:10.125     00:21:25 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:40:10.126  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:40:10.126     00:21:25 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:40:10.126     00:21:25 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:40:10.126     00:21:25 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT
00:40:10.126    00:21:25 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # name=key0
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # digest=0
00:40:10.126     00:21:25 keyring_file -- keyring/common.sh@18 -- # mktemp
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ig8kLBn0tq
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@733 -- # python -
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ig8kLBn0tq
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ig8kLBn0tq
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.ig8kLBn0tq
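The `format_interchange_psk` call above (nvmf/common.sh@743) pipes the hex key into an inline `python -` snippet that is not shown in the trace. A sketch of what that encoding looks like, based on my reading of the NVMe/TCP TLS PSK interchange convention (`NVMeTLSkey-1:<hh>:<base64(key ‖ CRC32)>:`, CRC32 little-endian) — treat the details as assumptions rather than SPDK's exact code:

```python
import base64
import zlib

def format_interchange_psk(hex_key: str, digest: int) -> str:
    """Encode a raw hex PSK in the NVMe TLS PSK interchange form:
    NVMeTLSkey-1:<hh>:<base64(key bytes + CRC32 little-endian)>:
    where <hh> is the hash field (0 = no hash, per the convention
    this sketch assumes; the log passes digest=0)."""
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")  # integrity check over the key
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02d}:{b64}:"
```

The formatted string is then written to the `mktemp` path and `chmod 0600`-ed, exactly as the `keyring/common.sh@21`–`@23` lines above record.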
00:40:10.126    00:21:25 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # name=key1
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@17 -- # digest=0
00:40:10.126     00:21:25 keyring_file -- keyring/common.sh@18 -- # mktemp
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.q3nrrN5rWU
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:40:10.126    00:21:25 keyring_file -- nvmf/common.sh@733 -- # python -
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.q3nrrN5rWU
00:40:10.126    00:21:25 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.q3nrrN5rWU
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.q3nrrN5rWU
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@30 -- # tgtpid=3369660
00:40:10.126   00:21:25 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3369660
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3369660 ']'
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:10.126  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:10.126   00:21:25 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:10.126  [2024-12-10 00:21:25.827742] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:40:10.126  [2024-12-10 00:21:25.827791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369660 ]
00:40:10.126  [2024-12-10 00:21:25.899563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:10.126  [2024-12-10 00:21:25.940801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:40:10.383   00:21:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:10.383  [2024-12-10 00:21:26.150141] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:10.383  null0
00:40:10.383  [2024-12-10 00:21:26.182206] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:40:10.383  [2024-12-10 00:21:26.182482] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:10.383   00:21:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:10.383    00:21:26 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:10.383  [2024-12-10 00:21:26.210264] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists
00:40:10.383  request:
00:40:10.383  {
00:40:10.383  "nqn": "nqn.2016-06.io.spdk:cnode0",
00:40:10.383  "secure_channel": false,
00:40:10.383  "listen_address": {
00:40:10.383  "trtype": "tcp",
00:40:10.383  "traddr": "127.0.0.1",
00:40:10.383  "trsvcid": "4420"
00:40:10.383  },
00:40:10.383  "method": "nvmf_subsystem_add_listener",
00:40:10.383  "req_id": 1
00:40:10.383  }
00:40:10.383  Got JSON-RPC error response
00:40:10.383  response:
00:40:10.383  {
00:40:10.383  "code": -32602,
00:40:10.383  "message": "Invalid parameters"
00:40:10.383  }
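The `NOT`-wrapped `rpc_cmd` above expects this duplicate-listener call to fail, and the log dumps the request and the -32602 response verbatim. A sketch that rebuilds the same request envelope (method and params copied from the log dump; the JSON-RPC 2.0 framing and `id` field are my assumption about how the RPC client wraps it, and the server would be listening on the `/var/tmp/spdk.sock` socket named earlier in the trace):

```python
import json

def build_add_listener_request(req_id: int = 1) -> str:
    """Build the nvmf_subsystem_add_listener JSON-RPC request whose
    parameters are dumped in the log above."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "nvmf_subsystem_add_listener",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode0",
            "secure_channel": False,
            "listen_address": {
                "trtype": "tcp",
                "traddr": "127.0.0.1",
                "trsvcid": "4420",
            },
        },
    })
```

Because the target already listens on 127.0.0.1:4420 (added at file.sh startup), the server logs "Listener already exists" and returns the Invalid parameters error — which is the outcome the `NOT` wrapper converts into a passing check (`es=1`).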
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:10.383   00:21:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=3369671
00:40:10.383   00:21:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3369671 /var/tmp/bperf.sock
00:40:10.383   00:21:26 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3369671 ']'
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:40:10.383  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:10.383   00:21:26 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:10.641  [2024-12-10 00:21:26.264419] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:40:10.641  [2024-12-10 00:21:26.264461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369671 ]
00:40:10.641  [2024-12-10 00:21:26.339075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:10.642  [2024-12-10 00:21:26.378484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:10.642   00:21:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:10.642   00:21:26 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:40:10.642   00:21:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:10.642   00:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:10.899   00:21:26 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q3nrrN5rWU
00:40:10.899   00:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q3nrrN5rWU
00:40:11.156    00:21:26 keyring_file -- keyring/file.sh@52 -- # get_key key0
00:40:11.156    00:21:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:40:11.156    00:21:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:11.156    00:21:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:11.156    00:21:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:11.414   00:21:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ig8kLBn0tq == \/\t\m\p\/\t\m\p\.\i\g\8\k\L\B\n\0\t\q ]]
00:40:11.414    00:21:27 keyring_file -- keyring/file.sh@53 -- # get_key key1
00:40:11.414    00:21:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:11.414   00:21:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.q3nrrN5rWU == \/\t\m\p\/\t\m\p\.\q\3\n\r\r\N\5\r\W\U ]]
00:40:11.414    00:21:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:11.414    00:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:11.671   00:21:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:40:11.671    00:21:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1
00:40:11.671    00:21:27 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:11.671    00:21:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:11.671    00:21:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:11.671    00:21:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:11.671    00:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:11.928   00:21:27 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 ))
00:40:11.928   00:21:27 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:11.928   00:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:12.185  [2024-12-10 00:21:27.822224] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:40:12.185  nvme0n1
00:40:12.185    00:21:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0
00:40:12.185    00:21:27 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:12.185    00:21:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:12.185    00:21:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:12.185    00:21:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:12.185    00:21:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:12.442   00:21:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 ))
00:40:12.442    00:21:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1
00:40:12.442    00:21:28 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:12.442    00:21:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:12.442    00:21:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:12.442    00:21:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:12.442    00:21:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:12.442   00:21:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 ))
00:40:12.442   00:21:28 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:12.700  Running I/O for 1 seconds...
00:40:13.632      19240.00 IOPS,    75.16 MiB/s
00:40:13.632                                                                                                  Latency(us)
00:40:13.632  
[2024-12-09T23:21:29.489Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:13.632  Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:40:13.632  	 nvme0n1             :       1.00   19288.62      75.35       0.00     0.00    6624.23    2590.23   12358.22
00:40:13.632  
[2024-12-09T23:21:29.489Z]  ===================================================================================================================
00:40:13.632  
[2024-12-09T23:21:29.489Z]  Total                       :              19288.62      75.35       0.00     0.00    6624.23    2590.23   12358.22
00:40:13.632  {
00:40:13.632    "results": [
00:40:13.632      {
00:40:13.632        "job": "nvme0n1",
00:40:13.632        "core_mask": "0x2",
00:40:13.632        "workload": "randrw",
00:40:13.632        "percentage": 50,
00:40:13.632        "status": "finished",
00:40:13.632        "queue_depth": 128,
00:40:13.632        "io_size": 4096,
00:40:13.632        "runtime": 1.004167,
00:40:13.632        "iops": 19288.624302531352,
00:40:13.632        "mibps": 75.3461886817631,
00:40:13.632        "io_failed": 0,
00:40:13.632        "io_timeout": 0,
00:40:13.632        "avg_latency_us": 6624.232836270034,
00:40:13.632        "min_latency_us": 2590.232380952381,
00:40:13.632        "max_latency_us": 12358.217142857144
00:40:13.632      }
00:40:13.632    ],
00:40:13.632    "core_count": 1
00:40:13.632  }
00:40:13.632   00:21:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:40:13.632   00:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:40:13.890    00:21:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0
00:40:13.890    00:21:29 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:13.890    00:21:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:13.890    00:21:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:13.890    00:21:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:13.890    00:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:14.147   00:21:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:40:14.147    00:21:29 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1
00:40:14.147    00:21:29 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:14.147    00:21:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:14.147    00:21:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:14.147    00:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:14.147    00:21:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:14.148   00:21:29 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 ))
00:40:14.148   00:21:29 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:14.148    00:21:29 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:14.148   00:21:29 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:14.148   00:21:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:40:14.407  [2024-12-10 00:21:30.174449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:40:14.407  [2024-12-10 00:21:30.174635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7470 (107): Transport endpoint is not connected
00:40:14.407  [2024-12-10 00:21:30.175629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c7470 (9): Bad file descriptor
00:40:14.407  [2024-12-10 00:21:30.176630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:40:14.407  [2024-12-10 00:21:30.176640] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:40:14.407  [2024-12-10 00:21:30.176647] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:40:14.407  [2024-12-10 00:21:30.176655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:40:14.407  request:
00:40:14.407  {
00:40:14.407    "name": "nvme0",
00:40:14.407    "trtype": "tcp",
00:40:14.407    "traddr": "127.0.0.1",
00:40:14.407    "adrfam": "ipv4",
00:40:14.407    "trsvcid": "4420",
00:40:14.407    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:14.407    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:14.407    "prchk_reftag": false,
00:40:14.407    "prchk_guard": false,
00:40:14.407    "hdgst": false,
00:40:14.407    "ddgst": false,
00:40:14.407    "psk": "key1",
00:40:14.407    "allow_unrecognized_csi": false,
00:40:14.407    "method": "bdev_nvme_attach_controller",
00:40:14.407    "req_id": 1
00:40:14.407  }
00:40:14.407  Got JSON-RPC error response
00:40:14.407  response:
00:40:14.407  {
00:40:14.407    "code": -5,
00:40:14.407    "message": "Input/output error"
00:40:14.407  }
00:40:14.407   00:21:30 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:40:14.407   00:21:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:14.407   00:21:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:14.407   00:21:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:14.407    00:21:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0
00:40:14.407    00:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:14.407    00:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:14.407    00:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:14.407    00:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:14.407    00:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:14.662   00:21:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:40:14.663    00:21:30 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1
00:40:14.663    00:21:30 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:14.663    00:21:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:14.663    00:21:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:14.663    00:21:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:14.663    00:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:14.919   00:21:30 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 ))
00:40:14.919   00:21:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0
00:40:14.919   00:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:40:15.176   00:21:30 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1
00:40:15.176   00:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:40:15.176    00:21:30 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys
00:40:15.176    00:21:30 keyring_file -- keyring/file.sh@78 -- # jq length
00:40:15.176    00:21:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:15.433   00:21:31 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 ))
00:40:15.433   00:21:31 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.ig8kLBn0tq
00:40:15.433   00:21:31 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:15.433    00:21:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:15.433   00:21:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.433   00:21:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.689  [2024-12-10 00:21:31.347712] keyring.c:  36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.ig8kLBn0tq': 0100660
00:40:15.689  [2024-12-10 00:21:31.347738] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:40:15.689  request:
00:40:15.689  {
00:40:15.689    "name": "key0",
00:40:15.689    "path": "/tmp/tmp.ig8kLBn0tq",
00:40:15.689    "method": "keyring_file_add_key",
00:40:15.689    "req_id": 1
00:40:15.689  }
00:40:15.689  Got JSON-RPC error response
00:40:15.689  response:
00:40:15.689  {
00:40:15.689    "code": -1,
00:40:15.689    "message": "Operation not permitted"
00:40:15.689  }
00:40:15.689   00:21:31 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:40:15.689   00:21:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:15.689   00:21:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:15.689   00:21:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:15.689   00:21:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.ig8kLBn0tq
00:40:15.689   00:21:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.689   00:21:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.ig8kLBn0tq
00:40:15.945   00:21:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.ig8kLBn0tq
00:40:15.945    00:21:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0
00:40:15.945    00:21:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:15.945    00:21:31 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:15.945    00:21:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:15.945    00:21:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:15.945    00:21:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:15.945   00:21:31 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 ))
00:40:15.945   00:21:31 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:15.945    00:21:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:15.945   00:21:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:15.945   00:21:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:16.202  [2024-12-10 00:21:31.965336] keyring.c:  31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.ig8kLBn0tq': No such file or directory
00:40:16.202  [2024-12-10 00:21:31.965361] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:40:16.202  [2024-12-10 00:21:31.965377] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:40:16.202  [2024-12-10 00:21:31.965384] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device
00:40:16.202  [2024-12-10 00:21:31.965390] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:40:16.202  [2024-12-10 00:21:31.965397] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:40:16.202  request:
00:40:16.202  {
00:40:16.202    "name": "nvme0",
00:40:16.202    "trtype": "tcp",
00:40:16.202    "traddr": "127.0.0.1",
00:40:16.202    "adrfam": "ipv4",
00:40:16.202    "trsvcid": "4420",
00:40:16.202    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:16.202    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:16.202    "prchk_reftag": false,
00:40:16.202    "prchk_guard": false,
00:40:16.202    "hdgst": false,
00:40:16.202    "ddgst": false,
00:40:16.202    "psk": "key0",
00:40:16.202    "allow_unrecognized_csi": false,
00:40:16.202    "method": "bdev_nvme_attach_controller",
00:40:16.202    "req_id": 1
00:40:16.202  }
00:40:16.202  Got JSON-RPC error response
00:40:16.202  response:
00:40:16.202  {
00:40:16.202    "code": -19,
00:40:16.202    "message": "No such device"
00:40:16.202  }
00:40:16.202   00:21:31 keyring_file -- common/autotest_common.sh@655 -- # es=1
00:40:16.202   00:21:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:16.202   00:21:31 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:16.202   00:21:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:16.202   00:21:31 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0
00:40:16.202   00:21:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:40:16.459    00:21:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@17 -- # name=key0
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@17 -- # digest=0
00:40:16.459     00:21:32 keyring_file -- keyring/common.sh@18 -- # mktemp
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.H1JA2PRtn1
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@732 -- # digest=0
00:40:16.459    00:21:32 keyring_file -- nvmf/common.sh@733 -- # python -
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.H1JA2PRtn1
00:40:16.459    00:21:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.H1JA2PRtn1
00:40:16.459   00:21:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.H1JA2PRtn1
00:40:16.459   00:21:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H1JA2PRtn1
00:40:16.459   00:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H1JA2PRtn1
00:40:16.715   00:21:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:16.715   00:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:16.972  nvme0n1
00:40:16.972    00:21:32 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0
00:40:16.972    00:21:32 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:16.972    00:21:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:16.972    00:21:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:16.972    00:21:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:16.972    00:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:17.229   00:21:32 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 ))
00:40:17.229   00:21:32 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0
00:40:17.229   00:21:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:40:17.486    00:21:33 keyring_file -- keyring/file.sh@102 -- # get_key key0
00:40:17.486    00:21:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:17.486   00:21:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]]
00:40:17.486    00:21:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:17.486    00:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:17.742   00:21:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 ))
00:40:17.742   00:21:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:40:17.742   00:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:40:17.998    00:21:33 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys
00:40:17.998    00:21:33 keyring_file -- keyring/file.sh@105 -- # jq length
00:40:17.998    00:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:18.254   00:21:33 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 ))
00:40:18.254   00:21:33 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.H1JA2PRtn1
00:40:18.254   00:21:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.H1JA2PRtn1
00:40:18.254   00:21:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.q3nrrN5rWU
00:40:18.254   00:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.q3nrrN5rWU
00:40:18.511   00:21:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:18.511   00:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:40:18.768  nvme0n1
00:40:18.768    00:21:34 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config
00:40:18.768    00:21:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:40:19.025   00:21:34 keyring_file -- keyring/file.sh@113 -- # config='{
00:40:19.025    "subsystems": [
00:40:19.025      {
00:40:19.025        "subsystem": "keyring",
00:40:19.025        "config": [
00:40:19.025          {
00:40:19.025            "method": "keyring_file_add_key",
00:40:19.025            "params": {
00:40:19.025              "name": "key0",
00:40:19.025              "path": "/tmp/tmp.H1JA2PRtn1"
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "keyring_file_add_key",
00:40:19.025            "params": {
00:40:19.025              "name": "key1",
00:40:19.025              "path": "/tmp/tmp.q3nrrN5rWU"
00:40:19.025            }
00:40:19.025          }
00:40:19.025        ]
00:40:19.025      },
00:40:19.025      {
00:40:19.025        "subsystem": "iobuf",
00:40:19.025        "config": [
00:40:19.025          {
00:40:19.025            "method": "iobuf_set_options",
00:40:19.025            "params": {
00:40:19.025              "small_pool_count": 8192,
00:40:19.025              "large_pool_count": 1024,
00:40:19.025              "small_bufsize": 8192,
00:40:19.025              "large_bufsize": 135168,
00:40:19.025              "enable_numa": false
00:40:19.025            }
00:40:19.025          }
00:40:19.025        ]
00:40:19.025      },
00:40:19.025      {
00:40:19.025        "subsystem": "sock",
00:40:19.025        "config": [
00:40:19.025          {
00:40:19.025            "method": "sock_set_default_impl",
00:40:19.025            "params": {
00:40:19.025              "impl_name": "posix"
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "sock_impl_set_options",
00:40:19.025            "params": {
00:40:19.025              "impl_name": "ssl",
00:40:19.025              "recv_buf_size": 4096,
00:40:19.025              "send_buf_size": 4096,
00:40:19.025              "enable_recv_pipe": true,
00:40:19.025              "enable_quickack": false,
00:40:19.025              "enable_placement_id": 0,
00:40:19.025              "enable_zerocopy_send_server": true,
00:40:19.025              "enable_zerocopy_send_client": false,
00:40:19.025              "zerocopy_threshold": 0,
00:40:19.025              "tls_version": 0,
00:40:19.025              "enable_ktls": false
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "sock_impl_set_options",
00:40:19.025            "params": {
00:40:19.025              "impl_name": "posix",
00:40:19.025              "recv_buf_size": 2097152,
00:40:19.025              "send_buf_size": 2097152,
00:40:19.025              "enable_recv_pipe": true,
00:40:19.025              "enable_quickack": false,
00:40:19.025              "enable_placement_id": 0,
00:40:19.025              "enable_zerocopy_send_server": true,
00:40:19.025              "enable_zerocopy_send_client": false,
00:40:19.025              "zerocopy_threshold": 0,
00:40:19.025              "tls_version": 0,
00:40:19.025              "enable_ktls": false
00:40:19.025            }
00:40:19.025          }
00:40:19.025        ]
00:40:19.025      },
00:40:19.025      {
00:40:19.025        "subsystem": "vmd",
00:40:19.025        "config": []
00:40:19.025      },
00:40:19.025      {
00:40:19.025        "subsystem": "accel",
00:40:19.025        "config": [
00:40:19.025          {
00:40:19.025            "method": "accel_set_options",
00:40:19.025            "params": {
00:40:19.025              "small_cache_size": 128,
00:40:19.025              "large_cache_size": 16,
00:40:19.025              "task_count": 2048,
00:40:19.025              "sequence_count": 2048,
00:40:19.025              "buf_count": 2048
00:40:19.025            }
00:40:19.025          }
00:40:19.025        ]
00:40:19.025      },
00:40:19.025      {
00:40:19.025        "subsystem": "bdev",
00:40:19.025        "config": [
00:40:19.025          {
00:40:19.025            "method": "bdev_set_options",
00:40:19.025            "params": {
00:40:19.025              "bdev_io_pool_size": 65535,
00:40:19.025              "bdev_io_cache_size": 256,
00:40:19.025              "bdev_auto_examine": true,
00:40:19.025              "iobuf_small_cache_size": 128,
00:40:19.025              "iobuf_large_cache_size": 16
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "bdev_raid_set_options",
00:40:19.025            "params": {
00:40:19.025              "process_window_size_kb": 1024,
00:40:19.025              "process_max_bandwidth_mb_sec": 0
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "bdev_iscsi_set_options",
00:40:19.025            "params": {
00:40:19.025              "timeout_sec": 30
00:40:19.025            }
00:40:19.025          },
00:40:19.025          {
00:40:19.025            "method": "bdev_nvme_set_options",
00:40:19.025            "params": {
00:40:19.025              "action_on_timeout": "none",
00:40:19.025              "timeout_us": 0,
00:40:19.025              "timeout_admin_us": 0,
00:40:19.025              "keep_alive_timeout_ms": 10000,
00:40:19.025              "arbitration_burst": 0,
00:40:19.025              "low_priority_weight": 0,
00:40:19.025              "medium_priority_weight": 0,
00:40:19.025              "high_priority_weight": 0,
00:40:19.025              "nvme_adminq_poll_period_us": 10000,
00:40:19.025              "nvme_ioq_poll_period_us": 0,
00:40:19.026              "io_queue_requests": 512,
00:40:19.026              "delay_cmd_submit": true,
00:40:19.026              "transport_retry_count": 4,
00:40:19.026              "bdev_retry_count": 3,
00:40:19.026              "transport_ack_timeout": 0,
00:40:19.026              "ctrlr_loss_timeout_sec": 0,
00:40:19.026              "reconnect_delay_sec": 0,
00:40:19.026              "fast_io_fail_timeout_sec": 0,
00:40:19.026              "disable_auto_failback": false,
00:40:19.026              "generate_uuids": false,
00:40:19.026              "transport_tos": 0,
00:40:19.026              "nvme_error_stat": false,
00:40:19.026              "rdma_srq_size": 0,
00:40:19.026              "io_path_stat": false,
00:40:19.026              "allow_accel_sequence": false,
00:40:19.026              "rdma_max_cq_size": 0,
00:40:19.026              "rdma_cm_event_timeout_ms": 0,
00:40:19.026              "dhchap_digests": [
00:40:19.026                "sha256",
00:40:19.026                "sha384",
00:40:19.026                "sha512"
00:40:19.026              ],
00:40:19.026              "dhchap_dhgroups": [
00:40:19.026                "null",
00:40:19.026                "ffdhe2048",
00:40:19.026                "ffdhe3072",
00:40:19.026                "ffdhe4096",
00:40:19.026                "ffdhe6144",
00:40:19.026                "ffdhe8192"
00:40:19.026              ]
00:40:19.026            }
00:40:19.026          },
00:40:19.026          {
00:40:19.026            "method": "bdev_nvme_attach_controller",
00:40:19.026            "params": {
00:40:19.026              "name": "nvme0",
00:40:19.026              "trtype": "TCP",
00:40:19.026              "adrfam": "IPv4",
00:40:19.026              "traddr": "127.0.0.1",
00:40:19.026              "trsvcid": "4420",
00:40:19.026              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:19.026              "prchk_reftag": false,
00:40:19.026              "prchk_guard": false,
00:40:19.026              "ctrlr_loss_timeout_sec": 0,
00:40:19.026              "reconnect_delay_sec": 0,
00:40:19.026              "fast_io_fail_timeout_sec": 0,
00:40:19.026              "psk": "key0",
00:40:19.026              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:19.026              "hdgst": false,
00:40:19.026              "ddgst": false,
00:40:19.026              "multipath": "multipath"
00:40:19.026            }
00:40:19.026          },
00:40:19.026          {
00:40:19.026            "method": "bdev_nvme_set_hotplug",
00:40:19.026            "params": {
00:40:19.026              "period_us": 100000,
00:40:19.026              "enable": false
00:40:19.026            }
00:40:19.026          },
00:40:19.026          {
00:40:19.026            "method": "bdev_wait_for_examine"
00:40:19.026          }
00:40:19.026        ]
00:40:19.026      },
00:40:19.026      {
00:40:19.026        "subsystem": "nbd",
00:40:19.026        "config": []
00:40:19.026      }
00:40:19.026    ]
00:40:19.026  }'
00:40:19.026   00:21:34 keyring_file -- keyring/file.sh@115 -- # killprocess 3369671
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3369671 ']'
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3369671
00:40:19.026    00:21:34 keyring_file -- common/autotest_common.sh@959 -- # uname
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:19.026    00:21:34 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369671
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369671'
00:40:19.026  killing process with pid 3369671
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@973 -- # kill 3369671
00:40:19.026  Received shutdown signal, test time was about 1.000000 seconds
00:40:19.026                                                                                                  Latency(us)
[2024-12-09T23:21:34.883Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T23:21:34.883Z]  ===================================================================================================================
[2024-12-09T23:21:34.883Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:40:19.026   00:21:34 keyring_file -- common/autotest_common.sh@978 -- # wait 3369671
00:40:19.294   00:21:34 keyring_file -- keyring/file.sh@118 -- # bperfpid=3371168
00:40:19.294   00:21:34 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3371168 /var/tmp/bperf.sock
00:40:19.294   00:21:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3371168 ']'
00:40:19.294   00:21:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:19.294   00:21:34 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:40:19.294   00:21:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:19.294   00:21:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:40:19.294  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:40:19.294    00:21:34 keyring_file -- keyring/file.sh@116 -- # echo '{
00:40:19.294    "subsystems": [
00:40:19.294      {
00:40:19.294        "subsystem": "keyring",
00:40:19.294        "config": [
00:40:19.294          {
00:40:19.294            "method": "keyring_file_add_key",
00:40:19.294            "params": {
00:40:19.294              "name": "key0",
00:40:19.294              "path": "/tmp/tmp.H1JA2PRtn1"
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "keyring_file_add_key",
00:40:19.294            "params": {
00:40:19.294              "name": "key1",
00:40:19.294              "path": "/tmp/tmp.q3nrrN5rWU"
00:40:19.294            }
00:40:19.294          }
00:40:19.294        ]
00:40:19.294      },
00:40:19.294      {
00:40:19.294        "subsystem": "iobuf",
00:40:19.294        "config": [
00:40:19.294          {
00:40:19.294            "method": "iobuf_set_options",
00:40:19.294            "params": {
00:40:19.294              "small_pool_count": 8192,
00:40:19.294              "large_pool_count": 1024,
00:40:19.294              "small_bufsize": 8192,
00:40:19.294              "large_bufsize": 135168,
00:40:19.294              "enable_numa": false
00:40:19.294            }
00:40:19.294          }
00:40:19.294        ]
00:40:19.294      },
00:40:19.294      {
00:40:19.294        "subsystem": "sock",
00:40:19.294        "config": [
00:40:19.294          {
00:40:19.294            "method": "sock_set_default_impl",
00:40:19.294            "params": {
00:40:19.294              "impl_name": "posix"
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "sock_impl_set_options",
00:40:19.294            "params": {
00:40:19.294              "impl_name": "ssl",
00:40:19.294              "recv_buf_size": 4096,
00:40:19.294              "send_buf_size": 4096,
00:40:19.294              "enable_recv_pipe": true,
00:40:19.294              "enable_quickack": false,
00:40:19.294              "enable_placement_id": 0,
00:40:19.294              "enable_zerocopy_send_server": true,
00:40:19.294              "enable_zerocopy_send_client": false,
00:40:19.294              "zerocopy_threshold": 0,
00:40:19.294              "tls_version": 0,
00:40:19.294              "enable_ktls": false
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "sock_impl_set_options",
00:40:19.294            "params": {
00:40:19.294              "impl_name": "posix",
00:40:19.294              "recv_buf_size": 2097152,
00:40:19.294              "send_buf_size": 2097152,
00:40:19.294              "enable_recv_pipe": true,
00:40:19.294              "enable_quickack": false,
00:40:19.294              "enable_placement_id": 0,
00:40:19.294              "enable_zerocopy_send_server": true,
00:40:19.294              "enable_zerocopy_send_client": false,
00:40:19.294              "zerocopy_threshold": 0,
00:40:19.294              "tls_version": 0,
00:40:19.294              "enable_ktls": false
00:40:19.294            }
00:40:19.294          }
00:40:19.294        ]
00:40:19.294      },
00:40:19.294      {
00:40:19.294        "subsystem": "vmd",
00:40:19.294        "config": []
00:40:19.294      },
00:40:19.294      {
00:40:19.294        "subsystem": "accel",
00:40:19.294        "config": [
00:40:19.294          {
00:40:19.294            "method": "accel_set_options",
00:40:19.294            "params": {
00:40:19.294              "small_cache_size": 128,
00:40:19.294              "large_cache_size": 16,
00:40:19.294              "task_count": 2048,
00:40:19.294              "sequence_count": 2048,
00:40:19.294              "buf_count": 2048
00:40:19.294            }
00:40:19.294          }
00:40:19.294        ]
00:40:19.294      },
00:40:19.294      {
00:40:19.294        "subsystem": "bdev",
00:40:19.294        "config": [
00:40:19.294          {
00:40:19.294            "method": "bdev_set_options",
00:40:19.294            "params": {
00:40:19.294              "bdev_io_pool_size": 65535,
00:40:19.294              "bdev_io_cache_size": 256,
00:40:19.294              "bdev_auto_examine": true,
00:40:19.294              "iobuf_small_cache_size": 128,
00:40:19.294              "iobuf_large_cache_size": 16
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "bdev_raid_set_options",
00:40:19.294            "params": {
00:40:19.294              "process_window_size_kb": 1024,
00:40:19.294              "process_max_bandwidth_mb_sec": 0
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "bdev_iscsi_set_options",
00:40:19.294            "params": {
00:40:19.294              "timeout_sec": 30
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "bdev_nvme_set_options",
00:40:19.294            "params": {
00:40:19.294              "action_on_timeout": "none",
00:40:19.294              "timeout_us": 0,
00:40:19.294              "timeout_admin_us": 0,
00:40:19.294              "keep_alive_timeout_ms": 10000,
00:40:19.294              "arbitration_burst": 0,
00:40:19.294              "low_priority_weight": 0,
00:40:19.294              "medium_priority_weight": 0,
00:40:19.294              "high_priority_weight": 0,
00:40:19.294              "nvme_adminq_poll_period_us": 10000,
00:40:19.294              "nvme_ioq_poll_period_us": 0,
00:40:19.294              "io_queue_requests": 512,
00:40:19.294              "delay_cmd_submit": true,
00:40:19.294              "transport_retry_count": 4,
00:40:19.294              "bdev_retry_count": 3,
00:40:19.294              "transport_ack_timeout": 0,
00:40:19.294              "ctrlr_loss_timeout_sec": 0,
00:40:19.294              "reconnect_delay_sec": 0,
00:40:19.294              "fast_io_fail_timeout_sec": 0,
00:40:19.294              "disable_auto_failback": false,
00:40:19.294              "generate_uuids": false,
00:40:19.294              "transport_tos": 0,
00:40:19.294              "nvme_error_stat": false,
00:40:19.294              "rdma_srq_size": 0,
00:40:19.294              "io_path_stat": false,
00:40:19.294              "allow_accel_sequence": false,
00:40:19.294              "rdma_max_cq_size": 0,
00:40:19.294              "rdma_cm_event_timeout_ms": 0,
00:40:19.294              "dhchap_digests": [
00:40:19.294                "sha256",
00:40:19.294                "sha384",
00:40:19.294                "sha512"
00:40:19.294              ],
00:40:19.294              "dhchap_dhgroups": [
00:40:19.294                "null",
00:40:19.294                "ffdhe2048",
00:40:19.294                "ffdhe3072",
00:40:19.294                "ffdhe4096",
00:40:19.294                "ffdhe6144",
00:40:19.294                "ffdhe8192"
00:40:19.294              ]
00:40:19.294            }
00:40:19.294          },
00:40:19.294          {
00:40:19.294            "method": "bdev_nvme_attach_controller",
00:40:19.294            "params": {
00:40:19.294              "name": "nvme0",
00:40:19.294              "trtype": "TCP",
00:40:19.294              "adrfam": "IPv4",
00:40:19.295              "traddr": "127.0.0.1",
00:40:19.295              "trsvcid": "4420",
00:40:19.295              "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:19.295              "prchk_reftag": false,
00:40:19.295              "prchk_guard": false,
00:40:19.295              "ctrlr_loss_timeout_sec": 0,
00:40:19.295              "reconnect_delay_sec": 0,
00:40:19.295              "fast_io_fail_timeout_sec": 0,
00:40:19.295              "psk": "key0",
00:40:19.295              "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:19.295              "hdgst": false,
00:40:19.295              "ddgst": false,
00:40:19.295              "multipath": "multipath"
00:40:19.295            }
00:40:19.295          },
00:40:19.295          {
00:40:19.295            "method": "bdev_nvme_set_hotplug",
00:40:19.295            "params": {
00:40:19.295              "period_us": 100000,
00:40:19.295              "enable": false
00:40:19.295            }
00:40:19.295          },
00:40:19.295          {
00:40:19.295            "method": "bdev_wait_for_examine"
00:40:19.295          }
00:40:19.295        ]
00:40:19.295      },
00:40:19.295      {
00:40:19.295        "subsystem": "nbd",
00:40:19.295        "config": []
00:40:19.295      }
00:40:19.295    ]
00:40:19.295  }'
00:40:19.295   00:21:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:19.295   00:21:34 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:19.295  [2024-12-10 00:21:35.021085] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:40:19.295  [2024-12-10 00:21:35.021133] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371168 ]
00:40:19.295  [2024-12-10 00:21:35.095662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:19.295  [2024-12-10 00:21:35.136321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:19.578  [2024-12-10 00:21:35.296682] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:40:20.183   00:21:35 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:20.183   00:21:35 keyring_file -- common/autotest_common.sh@868 -- # return 0
00:40:20.183    00:21:35 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys
00:40:20.183    00:21:35 keyring_file -- keyring/file.sh@121 -- # jq length
00:40:20.183    00:21:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:20.439   00:21:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 ))
00:40:20.439    00:21:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:20.439   00:21:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 ))
00:40:20.439    00:21:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:40:20.439    00:21:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:40:20.696   00:21:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 ))
00:40:20.696    00:21:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers
00:40:20.696    00:21:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name'
00:40:20.696    00:21:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers
00:40:20.953   00:21:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]]
00:40:20.953   00:21:36 keyring_file -- keyring/file.sh@1 -- # cleanup
00:40:20.953   00:21:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.H1JA2PRtn1 /tmp/tmp.q3nrrN5rWU
00:40:20.953   00:21:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3371168
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3371168 ']'
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3371168
00:40:20.953    00:21:36 keyring_file -- common/autotest_common.sh@959 -- # uname
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:20.953    00:21:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371168
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371168'
00:40:20.953  killing process with pid 3371168
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@973 -- # kill 3371168
00:40:20.953  Received shutdown signal, test time was about 1.000000 seconds
00:40:20.953                                                                                                  Latency(us)
[2024-12-09T23:21:36.810Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
[2024-12-09T23:21:36.810Z]  ===================================================================================================================
[2024-12-09T23:21:36.810Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00 18446744073709551616.00       0.00
00:40:20.953   00:21:36 keyring_file -- common/autotest_common.sh@978 -- # wait 3371168
00:40:21.211   00:21:36 keyring_file -- keyring/file.sh@21 -- # killprocess 3369660
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3369660 ']'
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3369660
00:40:21.211    00:21:36 keyring_file -- common/autotest_common.sh@959 -- # uname
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:21.211    00:21:36 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369660
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369660'
00:40:21.211  killing process with pid 3369660
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@973 -- # kill 3369660
00:40:21.211   00:21:36 keyring_file -- common/autotest_common.sh@978 -- # wait 3369660
00:40:21.470  
00:40:21.470  real	0m11.754s
00:40:21.470  user	0m29.167s
00:40:21.470  sys	0m2.781s
00:40:21.470   00:21:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:21.470   00:21:37 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:40:21.470  ************************************
00:40:21.470  END TEST keyring_file
00:40:21.470  ************************************
00:40:21.470   00:21:37  -- spdk/autotest.sh@293 -- # [[ y == y ]]
00:40:21.470   00:21:37  -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:40:21.470   00:21:37  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:40:21.470   00:21:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:40:21.470   00:21:37  -- common/autotest_common.sh@10 -- # set +x
00:40:21.470  ************************************
00:40:21.470  START TEST keyring_linux
00:40:21.470  ************************************
00:40:21.470   00:21:37 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh
00:40:21.470  Joined session keyring: 369955794
00:40:21.729  * Looking for test storage...
00:40:21.729  * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:40:21.729     00:21:37 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version
00:40:21.729     00:21:37 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-:
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-:
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<'
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@345 -- # : 1
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 ))
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@353 -- # local d=1
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@355 -- # echo 1
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@353 -- # local d=2
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:40:21.729     00:21:37 keyring_linux -- scripts/common.sh@355 -- # echo 2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:40:21.729    00:21:37 keyring_linux -- scripts/common.sh@368 -- # return 0
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:40:21.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:21.729  		--rc genhtml_branch_coverage=1
00:40:21.729  		--rc genhtml_function_coverage=1
00:40:21.729  		--rc genhtml_legend=1
00:40:21.729  		--rc geninfo_all_blocks=1
00:40:21.729  		--rc geninfo_unexecuted_blocks=1
00:40:21.729  		
00:40:21.729  		'
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:40:21.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:21.729  		--rc genhtml_branch_coverage=1
00:40:21.729  		--rc genhtml_function_coverage=1
00:40:21.729  		--rc genhtml_legend=1
00:40:21.729  		--rc geninfo_all_blocks=1
00:40:21.729  		--rc geninfo_unexecuted_blocks=1
00:40:21.729  		
00:40:21.729  		'
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:40:21.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:21.729  		--rc genhtml_branch_coverage=1
00:40:21.729  		--rc genhtml_function_coverage=1
00:40:21.729  		--rc genhtml_legend=1
00:40:21.729  		--rc geninfo_all_blocks=1
00:40:21.729  		--rc geninfo_unexecuted_blocks=1
00:40:21.729  		
00:40:21.729  		'
00:40:21.729    00:21:37 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:40:21.729  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:40:21.729  		--rc genhtml_branch_coverage=1
00:40:21.729  		--rc genhtml_function_coverage=1
00:40:21.729  		--rc genhtml_legend=1
00:40:21.729  		--rc geninfo_all_blocks=1
00:40:21.729  		--rc geninfo_unexecuted_blocks=1
00:40:21.729  		
00:40:21.729  		'
00:40:21.729   00:21:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh
00:40:21.729    00:21:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:40:21.729      00:21:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:40:21.729      00:21:37 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:40:21.729      00:21:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob
00:40:21.729      00:21:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:40:21.729      00:21:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:40:21.729      00:21:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:40:21.729       00:21:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:21.729       00:21:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:21.729       00:21:37 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:40:21.729       00:21:37 keyring_linux -- paths/export.sh@5 -- # export PATH
00:40:21.729       00:21:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
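The `paths/export.sh` trace above prepends the same toolchain directories each time it is sourced, so PATH accumulates duplicates (`/opt/go/1.21.1/bin` appears four times in the final echo). An order-preserving dedup helper (hypothetical, not part of SPDK) would collapse such a PATH while keeping the first, highest-priority occurrence of each entry:

```python
def dedup_path(path: str) -> str:
    """Drop repeated PATH entries, keeping the first (highest-priority) one."""
    seen = {}
    for entry in path.split(":"):
        seen.setdefault(entry, None)  # dict preserves insertion order
    return ":".join(seen)

print(dedup_path("/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin:/sbin"))
# /opt/go/bin:/usr/bin:/sbin
```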
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@51 -- # : 0
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:40:21.729     00:21:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:40:21.730     00:21:37 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:40:21.730     00:21:37 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:40:21.730  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:40:21.730     00:21:37 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:40:21.730     00:21:37 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:40:21.730     00:21:37 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0
00:40:21.730    00:21:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # name=key0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@733 -- # python -
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0
00:40:21.730  /tmp/:spdk-test:key0
00:40:21.730   00:21:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # name=key1
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@17 -- # digest=0
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1
00:40:21.730   00:21:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@732 -- # digest=0
00:40:21.730   00:21:37 keyring_linux -- nvmf/common.sh@733 -- # python -
00:40:21.988   00:21:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1
00:40:21.988   00:21:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1
00:40:21.988  /tmp/:spdk-test:key1
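The `format_interchange_psk` / `format_key` steps above pipe each key through `python -` to build the TLS PSK interchange string (`NVMeTLSkey-1:<digest>:<base64>:`) that is written to `/tmp/:spdk-test:key0` and `/tmp/:spdk-test:key1`. A minimal sketch of that transformation, assuming the base64 payload is the key string with a 4-byte CRC32 appended (the CRC byte order here is an assumption, not something the log shows):

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the format_key step: base64(key || crc32(key)) wrapped in
    the NVMe/TCP PSK interchange framing. CRC endianness is an assumption."""
    material = key.encode()
    crc = zlib.crc32(material).to_bytes(4, "little")  # assumed little-endian
    return f"{prefix}:{digest:02}:{base64.b64encode(material + crc).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

For key0 the base64 body begins `MDAxMTIyMzM0NDU1…`, matching the value later loaded via `keyctl add`; only the trailing characters depend on the assumed CRC byte order.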
00:40:21.988   00:21:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:40:21.988   00:21:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3371698
00:40:21.988   00:21:37 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3371698
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3371698 ']'
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:40:21.988  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:21.988   00:21:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:40:21.988  [2024-12-10 00:21:37.638623] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:40:21.988  [2024-12-10 00:21:37.638669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371698 ]
00:40:21.988  [2024-12-10 00:21:37.712840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:21.988  [2024-12-10 00:21:37.753463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:40:22.246   00:21:37 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:22.246   00:21:37 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:40:22.246   00:21:37 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:40:22.246   00:21:37 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable
00:40:22.246   00:21:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:40:22.246  [2024-12-10 00:21:37.974047] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:40:22.246  null0
00:40:22.246  [2024-12-10 00:21:38.006101] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:40:22.246  [2024-12-10 00:21:38.006396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:40:22.246   00:21:38 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:40:22.246   00:21:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:40:22.246  534970989
00:40:22.246   00:21:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:40:22.246  577529170
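The serial numbers printed above (534970989, 577529170) are returned by `keyctl add user <name> <payload> @s`. The payload is the interchange string, whose base64 body carries the ASCII key material followed by 4 CRC bytes; decoding the key0 value taken verbatim from the log shows the layout:

```python
import base64

payload = "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
prefix, digest, body, _ = payload.split(":")
raw = base64.b64decode(body)

key_material, crc = raw[:-4], raw[-4:]  # last 4 bytes are the checksum
print(prefix, digest)         # NVMeTLSkey-1 00
print(key_material.decode())  # 00112233445566778899aabbccddeeff
print(crc.hex())              # 70244890
```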
00:40:22.246   00:21:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3371824
00:40:22.246   00:21:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3371824 /var/tmp/bperf.sock
00:40:22.246   00:21:38 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:40:22.246   00:21:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3371824 ']'
00:40:22.246   00:21:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:40:22.246   00:21:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100
00:40:22.246   00:21:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:40:22.246  Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:40:22.247   00:21:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable
00:40:22.247   00:21:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:40:22.247  [2024-12-10 00:21:38.079208] Starting SPDK v25.01-pre git sha1 06358c250 / DPDK 24.03.0 initialization...
00:40:22.247  [2024-12-10 00:21:38.079251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371824 ]
00:40:22.504  [2024-12-10 00:21:38.152601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:40:22.504  [2024-12-10 00:21:38.193582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:40:22.504   00:21:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:40:22.504   00:21:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0
00:40:22.504   00:21:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:40:22.504   00:21:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:40:22.761   00:21:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:40:22.761   00:21:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:40:23.019   00:21:38 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:40:23.019   00:21:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:40:23.019  [2024-12-10 00:21:38.855090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:40:23.276  nvme0n1
00:40:23.276   00:21:38 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:40:23.276   00:21:38 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:40:23.276   00:21:38 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:40:23.276    00:21:38 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:40:23.276    00:21:38 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:40:23.276    00:21:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:23.276   00:21:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:40:23.532   00:21:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:40:23.532    00:21:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:40:23.532    00:21:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:40:23.533    00:21:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:40:23.533    00:21:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:23.533    00:21:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:40:23.533   00:21:39 keyring_linux -- keyring/linux.sh@25 -- # sn=534970989
00:40:23.533    00:21:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:40:23.533    00:21:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:40:23.533   00:21:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 534970989 == \5\3\4\9\7\0\9\8\9 ]]
00:40:23.533    00:21:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 534970989
00:40:23.533   00:21:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:40:23.533   00:21:39 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:40:23.790  Running I/O for 1 seconds...
00:40:24.721      21374.00 IOPS,    83.49 MiB/s
00:40:24.721                                                                                                  Latency(us)
00:40:24.721  
[2024-12-09T23:21:40.578Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:24.721  Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:40:24.721  	 nvme0n1             :       1.01   21375.18      83.50       0.00     0.00    5968.41    4649.94   12607.88
00:40:24.721  
[2024-12-09T23:21:40.578Z]  ===================================================================================================================
00:40:24.721  
[2024-12-09T23:21:40.578Z]  Total                       :              21375.18      83.50       0.00     0.00    5968.41    4649.94   12607.88
00:40:24.721  {
00:40:24.721    "results": [
00:40:24.721      {
00:40:24.721        "job": "nvme0n1",
00:40:24.721        "core_mask": "0x2",
00:40:24.721        "workload": "randread",
00:40:24.721        "status": "finished",
00:40:24.721        "queue_depth": 128,
00:40:24.721        "io_size": 4096,
00:40:24.721        "runtime": 1.00598,
00:40:24.721        "iops": 21375.17644485974,
00:40:24.721        "mibps": 83.49678298773335,
00:40:24.721        "io_failed": 0,
00:40:24.721        "io_timeout": 0,
00:40:24.721        "avg_latency_us": 5968.407435330176,
00:40:24.721        "min_latency_us": 4649.935238095238,
00:40:24.721        "max_latency_us": 12607.878095238095
00:40:24.721      }
00:40:24.721    ],
00:40:24.721    "core_count": 1
00:40:24.721  }
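The bdevperf summary above reports both IOPS and MiB/s; the second is derived from the first using the 4 KiB I/O size set by `-o 4k`. A quick check against the JSON block:

```python
iops = 21375.17644485974       # "iops" from the results JSON above
io_size = 4096                 # bdevperf -o 4k
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))         # 83.5, i.e. the reported 83.49678298773335
```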
00:40:24.721   00:21:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:40:24.721   00:21:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:40:24.978   00:21:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:40:24.978   00:21:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:40:24.978   00:21:40 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:40:24.978    00:21:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:40:24.978    00:21:40 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:40:24.978    00:21:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:40:25.236   00:21:40 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:40:25.236   00:21:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:40:25.236   00:21:40 keyring_linux -- keyring/linux.sh@23 -- # return
00:40:25.236   00:21:40 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@652 -- # local es=0
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:25.236    00:21:40 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:40:25.236   00:21:40 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:25.236   00:21:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:40:25.236  [2024-12-10 00:21:41.034992] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:40:25.236  [2024-12-10 00:21:41.035674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f6220 (107): Transport endpoint is not connected
00:40:25.236  [2024-12-10 00:21:41.036668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8f6220 (9): Bad file descriptor
00:40:25.236  [2024-12-10 00:21:41.037669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state
00:40:25.236  [2024-12-10 00:21:41.037680] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:40:25.236  [2024-12-10 00:21:41.037687] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted
00:40:25.236  [2024-12-10 00:21:41.037695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state.
00:40:25.236  request:
00:40:25.236  {
00:40:25.236    "name": "nvme0",
00:40:25.236    "trtype": "tcp",
00:40:25.236    "traddr": "127.0.0.1",
00:40:25.236    "adrfam": "ipv4",
00:40:25.236    "trsvcid": "4420",
00:40:25.236    "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:40:25.236    "hostnqn": "nqn.2016-06.io.spdk:host0",
00:40:25.236    "prchk_reftag": false,
00:40:25.236    "prchk_guard": false,
00:40:25.236    "hdgst": false,
00:40:25.236    "ddgst": false,
00:40:25.236    "psk": ":spdk-test:key1",
00:40:25.236    "allow_unrecognized_csi": false,
00:40:25.236    "method": "bdev_nvme_attach_controller",
00:40:25.236    "req_id": 1
00:40:25.236  }
00:40:25.236  Got JSON-RPC error response
00:40:25.236  response:
00:40:25.236  {
00:40:25.236    "code": -5,
00:40:25.236    "message": "Input/output error"
00:40:25.236  }
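The failed attach above is the intentional negative case (`NOT bperf_cmd …` with `--psk :spdk-test:key1`), and the request/response pair is a plain JSON-RPC exchange over the Unix socket `/var/tmp/bperf.sock`, pretty-printed by `rpc.py`. A sketch of that framing, using the fields shown in the log (the `jsonrpc`/`id` envelope fields are standard JSON-RPC 2.0, assumed here rather than visible in the dump):

```python
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "nvme0",
        "trtype": "tcp",
        "traddr": "127.0.0.1",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "psk": ":spdk-test:key1",
    },
}
response = {"jsonrpc": "2.0", "id": 1,
            "error": {"code": -5, "message": "Input/output error"}}

wire = json.dumps(request)              # what would go over the socket
reply = json.loads(json.dumps(response))
print(reply["error"]["code"])           # -5, matching the log
```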
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:40:25.236    00:21:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:40:25.236    00:21:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@33 -- # sn=534970989
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 534970989
00:40:25.236  1 links removed
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:40:25.236    00:21:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:40:25.236    00:21:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@33 -- # sn=577529170
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 577529170
00:40:25.236  1 links removed
00:40:25.236   00:21:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3371824
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3371824 ']'
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3371824
00:40:25.236    00:21:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:40:25.236   00:21:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:25.236    00:21:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371824
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371824'
00:40:25.494  killing process with pid 3371824
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 3371824
00:40:25.494  Received shutdown signal, test time was about 1.000000 seconds
00:40:25.494  
00:40:25.494                                                                                                  Latency(us)
00:40:25.494  
[2024-12-09T23:21:41.351Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:40:25.494  
[2024-12-09T23:21:41.351Z]  ===================================================================================================================
00:40:25.494  
[2024-12-09T23:21:41.351Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 3371824
00:40:25.494   00:21:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3371698
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3371698 ']'
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3371698
00:40:25.494    00:21:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:40:25.494    00:21:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371698
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371698'
00:40:25.494  killing process with pid 3371698
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 3371698
00:40:25.494   00:21:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 3371698
00:40:26.060  
00:40:26.060  real	0m4.322s
00:40:26.060  user	0m8.118s
00:40:26.060  sys	0m1.478s
00:40:26.060   00:21:41 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:40:26.060   00:21:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:40:26.060  ************************************
00:40:26.060  END TEST keyring_linux
00:40:26.060  ************************************
00:40:26.060   00:21:41  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:40:26.060   00:21:41  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:40:26.060   00:21:41  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:40:26.060   00:21:41  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:40:26.060   00:21:41  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:40:26.060   00:21:41  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:40:26.060   00:21:41  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:40:26.060   00:21:41  -- common/autotest_common.sh@726 -- # xtrace_disable
00:40:26.060   00:21:41  -- common/autotest_common.sh@10 -- # set +x
00:40:26.060   00:21:41  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:40:26.060   00:21:41  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:40:26.060   00:21:41  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:40:26.060   00:21:41  -- common/autotest_common.sh@10 -- # set +x
00:40:31.330  INFO: APP EXITING
00:40:31.330  INFO: killing all VMs
00:40:31.330  INFO: killing vhost app
00:40:31.330  INFO: EXIT DONE
00:40:33.862  0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:40:33.862  0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:40:34.121  0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:40:34.380  0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:40:34.380  0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:40:34.380  0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:40:37.667  Cleaning
00:40:37.667  Removing:    /var/run/dpdk/spdk0/config
00:40:37.667  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:40:37.667  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:40:37.668  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:40:37.668  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:40:37.668  Removing:    /var/run/dpdk/spdk1/config
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:40:37.668  Removing:    /var/run/dpdk/spdk1/fbarray_memzone
00:40:37.668  Removing:    /var/run/dpdk/spdk1/hugepage_info
00:40:37.668  Removing:    /var/run/dpdk/spdk2/config
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:40:37.668  Removing:    /var/run/dpdk/spdk2/fbarray_memzone
00:40:37.668  Removing:    /var/run/dpdk/spdk2/hugepage_info
00:40:37.668  Removing:    /var/run/dpdk/spdk3/config
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:40:37.668  Removing:    /var/run/dpdk/spdk3/fbarray_memzone
00:40:37.668  Removing:    /var/run/dpdk/spdk3/hugepage_info
00:40:37.668  Removing:    /var/run/dpdk/spdk4/config
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:40:37.668  Removing:    /var/run/dpdk/spdk4/fbarray_memzone
00:40:37.668  Removing:    /var/run/dpdk/spdk4/hugepage_info
00:40:37.668  Removing:    /dev/shm/bdev_svc_trace.1
00:40:37.668  Removing:    /dev/shm/nvmf_trace.0
00:40:37.668  Removing:    /dev/shm/spdk_tgt_trace.pid2896257
00:40:37.668  Removing:    /var/run/dpdk/spdk0
00:40:37.668  Removing:    /var/run/dpdk/spdk1
00:40:37.668  Removing:    /var/run/dpdk/spdk2
00:40:37.668  Removing:    /var/run/dpdk/spdk3
00:40:37.668  Removing:    /var/run/dpdk/spdk4
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2894155
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2895197
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2896257
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2896880
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2897805
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2898033
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2898981
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2898997
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2899343
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2900822
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2902274
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2902562
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2902843
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2903141
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2903285
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2903478
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2903719
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2903994
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2904713
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2907716
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2907894
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2908168
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2908193
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2908756
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2908763
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2909241
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2909410
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2909974
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2910118
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2910368
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2910386
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2910934
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2911175
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2911472
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2915113
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2919507
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2929576
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2930169
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2934466
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2934713
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2938901
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2944718
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2947419
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2958021
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2967127
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2968914
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2969814
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2986721
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid2990734
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3036301
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3041586
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3047449
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3053707
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3053792
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3054545
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3055494
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3056770
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3057451
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3057578
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3057883
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3057904
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3057906
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3058797
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3059689
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3060583
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3061037
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3061165
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3061477
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3062476
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3063434
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3071550
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3100219
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3104642
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3106317
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3108048
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3108225
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3108447
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3108470
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3108956
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3110742
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3111701
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3112076
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3114237
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3114718
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3115208
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3119398
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3124890
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3124891
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3124892
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3128603
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3137628
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3141676
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3147699
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3148869
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3150359
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3151650
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3156253
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3160530
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3164480
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3171944
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3171948
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3176578
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3176800
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3177026
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3177461
00:40:37.668  Removing:    /var/run/dpdk/spdk_pid3177474
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3182003
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3182962
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3187436
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3190060
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3195409
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3200875
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3209670
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3216748
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3216750
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3235912
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3236375
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3237010
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3237507
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3238222
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3238691
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3239355
00:40:37.669  Removing:    /var/run/dpdk/spdk_pid3239818
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3243981
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3244229
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3250173
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3250428
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3255712
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3259976
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3269704
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3270162
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3274861
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3275130
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3279271
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3285007
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3287522
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3297401
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3306005
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3307780
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3308678
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3325048
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3328976
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3331626
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3339403
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3339416
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3344360
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3346275
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3348192
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3349298
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3351340
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3352378
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3361168
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3361615
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3362100
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3364630
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3365460
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3365913
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3369660
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3369671
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3371168
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3371698
00:40:37.928  Removing:    /var/run/dpdk/spdk_pid3371824
00:40:37.928  Clean
00:40:37.928   00:21:53  -- common/autotest_common.sh@1453 -- # return 0
00:40:37.928   00:21:53  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:40:37.928   00:21:53  -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:37.928   00:21:53  -- common/autotest_common.sh@10 -- # set +x
00:40:37.928   00:21:53  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:40:37.928   00:21:53  -- common/autotest_common.sh@732 -- # xtrace_disable
00:40:37.928   00:21:53  -- common/autotest_common.sh@10 -- # set +x
00:40:38.187   00:21:53  -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:40:38.187   00:21:53  -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:40:38.187   00:21:53  -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:40:38.187   00:21:53  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:40:38.187    00:21:53  -- spdk/autotest.sh@398 -- # hostname
00:40:38.187   00:21:53  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:40:38.187  geninfo: WARNING: invalid characters removed from testname!
00:41:00.106   00:22:14  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:02.011   00:22:17  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:03.388   00:22:19  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:05.293   00:22:21  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:07.196   00:22:22  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:09.100   00:22:24  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:41:11.002   00:22:26  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:41:11.002   00:22:26  -- spdk/autorun.sh@1 -- $ timing_finish
00:41:11.002   00:22:26  -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:41:11.002   00:22:26  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:41:11.002   00:22:26  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:41:11.002   00:22:26  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:41:11.002  + [[ -n 2817375 ]]
00:41:11.002  + sudo kill 2817375
00:41:11.012  [Pipeline] }
00:41:11.026  [Pipeline] // stage
00:41:11.031  [Pipeline] }
00:41:11.045  [Pipeline] // timeout
00:41:11.049  [Pipeline] }
00:41:11.061  [Pipeline] // catchError
00:41:11.065  [Pipeline] }
00:41:11.076  [Pipeline] // wrap
00:41:11.081  [Pipeline] }
00:41:11.093  [Pipeline] // catchError
00:41:11.101  [Pipeline] stage
00:41:11.103  [Pipeline] { (Epilogue)
00:41:11.115  [Pipeline] catchError
00:41:11.117  [Pipeline] {
00:41:11.129  [Pipeline] echo
00:41:11.130  Cleanup processes
00:41:11.136  [Pipeline] sh
00:41:11.422  + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:11.422  3382517 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:11.436  [Pipeline] sh
00:41:11.722  ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:41:11.722  ++ grep -v 'sudo pgrep'
00:41:11.722  ++ awk '{print $1}'
00:41:11.722  + sudo kill -9
00:41:11.722  + true
00:41:11.734  [Pipeline] sh
00:41:12.018  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:41:24.231  [Pipeline] sh
00:41:24.515  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:41:24.515  Artifacts sizes are good
00:41:24.527  [Pipeline] archiveArtifacts
00:41:24.534  Archiving artifacts
00:41:24.667  [Pipeline] sh
00:41:25.021  + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:41:25.035  [Pipeline] cleanWs
00:41:25.045  [WS-CLEANUP] Deleting project workspace...
00:41:25.045  [WS-CLEANUP] Deferred wipeout is used...
00:41:25.052  [WS-CLEANUP] done
00:41:25.054  [Pipeline] }
00:41:25.071  [Pipeline] // catchError
00:41:25.082  [Pipeline] sh
00:41:25.364  + logger -p user.info -t JENKINS-CI
00:41:25.372  [Pipeline] }
00:41:25.386  [Pipeline] // stage
00:41:25.391  [Pipeline] }
00:41:25.405  [Pipeline] // node
00:41:25.410  [Pipeline] End of Pipeline
00:41:25.456  Finished: SUCCESS